
Wednesday, December 28, 2011

Tiny Tests: User Research You Can Do NOW!

There’s a lot of advice about how to do great user research. I have some pretty strong opinions about it myself.

But, as with exercise, the best kind of research is the kind that you actually DO.

So, in the interests of getting some good feedback from your users right now, I have some suggestions for Tiny Tests. These are types of research that you could do right this second with very little preparation on your part.

What Is a Tiny Test?

Tiny Tests do not take a lot of time. They don’t take a lot of money. All they take is a commitment to learning something from your users today.

Pick a Tiny Test that applies to your product and get out and run one right now. Oh, ok. You can wait until you finish the post.

Unmoderated Tests

Dozens of companies now exist that allow you to run an unmoderated test in a few minutes. I’ve used UserTesting.com many times and gotten some great results really quickly. I’ve also heard good things about Loop11 and several others, so feel free to pick the one that you like best.

What you do is come up with a few tasks that you want to see people perform with your product. When the test is over, you get screen recordings of people trying to do those things while they narrate the experience.

Typically, I’ll use remote, unmoderated testing when I want to get some quick insight into whether a new feature is usable and obvious for a brand new user.

For example, if you’ve just added the ability for users to message each other on your site, you can use remote, unmoderated testing to watch people attempt to message somebody. This will help you identify the places where they’re getting lost or confused.

If you’ve done a little recruiting and have a list of users who are willing to participate, you can even ask your own users to be the participants.

And don’t forget, if you don’t have a product, or if you’re looking at other products for inspiration, you can run an unmoderated test on a competitor’s product. This can be a great way to see if a particular implementation of a feature is usable without ever having to write a line of code. It can also be a great way to understand where there might be problems with competing products that you can exploit.

Are you going to get as much in-depth, targeted feedback as you would if you ran a really well-designed, in-person user test? Probably not. But it’ll take you 10 minutes to set up and 15 minutes to watch each video, so you might actually do this.

Remote Observation

There is something to be said for traveling to visit your users and spending time in their homes or offices. It can be extremely educational. It can also be extremely expensive and time consuming.

Here’s a way to get a lot of value with fewer frequent flyer miles.

Look at the people in your Skype contacts. Find one who doesn’t know much about your product. Ping them. Ask them to do three small tasks on your product while sharing their screen.

Don’t have Skype? Send friends a GoToMeeting or a WebEx link through email.

As with the remote unmoderated testing, this is best for figuring out if something is confusing or hard to do. It’s not very useful for figuring out whether people will like or use new features, because typically the people in your Skype contacts aren’t representative of real users of your product.

The closer the people are to your target market, the better the feedback is going to be, but almost anybody can tell you if something is hard to use, and that’s information you could use right now.

Coffee Shop Guerrilla Testing

Of course, it’s tough to test a mobile app over Skype. You know where it’s easy to test a mobile app? At a coffee shop.

Go outside. Find a Starbucks (other coffee shops are also acceptable if you refuse to go to Starbucks, you insufferable snob). Buy some $5 gift cards. Offer to buy people coffee if they spend 5 minutes looking at your product. Have a few tasks in mind that you want them to perform.

In about an hour, you can watch a dozen people use your app. And if you don’t manage to get any good feedback, at least you can get coffee. But you’ll almost certainly get some good feedback.

This type of feedback is great for telling you if a particular task is hard or confusing. It’s also great for getting first impressions of what an app does or the type of person who might use it.

Five Second Landing Page Testing

Sometimes, all you want to test is a new landing page. What you frequently want to know about a landing page is, “What message is this conveying, and is it conveying it clearly and quickly?” Even the tiniest of tests can seem like overkill for that.

For landing pages, I use UsabilityHub’s Five Second Test. You take a screenshot or mockup of the landing page you want to show. You upload it to the site. You enter a few questions you want people to answer after looking at it.

If the whole setup process takes you more than 5 minutes, you’re doing it wrong. Within a few hours, you can have dozens of people look at your landing page and tell you what they think your product does.

This sort of Tiny Test is wonderful for testing several different variations of messages or images that you might put on a landing page. You can get people’s real first impressions of what they think you’re trying to tell them.

CTA Testing

The most important thing to get right on any screen is the Call To Action. After all, you can have the most gorgeously designed images with a wonderfully crafted message, but if people can’t find the damn Buy button, you’re screwed.

But, as with the landing page tests, this is something you can test in a matter of seconds. Basically, you want to show people a screen and see if they can figure out where they should click. Guerrilla testing works pretty well for this, but even that may be overkill here.

For CTA testing, I often use UsabilityHub’s ClickTest product. Again, you just upload a mock and ask people something like, “Where would you click to purchase the product shown on this page?” or “Where would you go to advance to the next slide?” or whatever CTA you’re testing.

A few hours later, you get a map of where people clicked. If there are clicks all over the place, you’ve got some work to do on your CTA.

The advantage of doing something like this over A/B testing is simply that you can get it set up very quickly with just mockups. You don’t have to actually implement anything on your site (or even have a site) in order to test this way. But if you have enough traffic and a good A/B system already set up, by all means test that way as well.

What Are You Waiting For?

There you go. Five different options for wildly fast, incredibly cheap feedback on your product. You don’t have to hire a recruiter or write a discussion guide or rent out a usability lab. In a few cases, you don’t even have to interact with a human.

Are they perfect? Do they take the place of more substantial research? Will you be able to get away with avoiding talking to your users forever? No. But they’re easy, and you can do one of them right this second.

So...do one of them right this second!

Like the post? Follow me on Twitter.

Sunday, December 11, 2011

STFU About What Women Want

In a recent post on TechCrunch, Penelope Trunk tells us (again) that most women don’t want to do startups.

First, I’d like to extend that to Asians, African Americans, Gays, and Latinos. Oh, and white men. Most of them don’t want to do startups either, because most people don’t want to do startups for a whole host of reasons.

Penelope tells us that women are different though, because women don’t want to join startups because women want to have babies. As evidence, she points out that most women downshift their careers as soon as they have babies, which of course makes startups impossible.

It’s not that women don’t join startups because of lack of opportunity or sexism or doing what’s expected of them or anything else. Now that we have completely defeated bias, all women can choose to do anything they want, and they are choosing to have babies rather than go to startups. Case closed!

Here’s the problem: Penelope, and other people who say things like this, are making my life a whole lot harder, and I’d like them to knock it the fuck off.

I’m not going to argue that most women don’t want to stay home with their children. Frankly, I don’t care what most women want to do.

I know what I want to do, and what I want to do is to work at startups. I don’t want to have children. I’ve never wanted children. I never will want children, and I certainly wouldn’t want to give up working at startups for them.

So, when a publication like TechCrunch spews some nonsense about what women want, it means that the next time I go into an interview with a male founder (and they are overwhelmingly male for some reason that I’m not going to address here, but that Penelope assures us has nothing to do with bias) who has read that nonsense, he may be thinking, consciously or subconsciously, “she doesn’t really want to work at this startup because she wants to have a baby.”

And frankly, that sucks for me and all the other women like me. Oh, did I mention that there are lots of other women like me? There are.

But let’s just look for a moment at what all of these other women, the ones with babies and without startups, are choosing to do. They are choosing to stay home because of...I don’t know what the current argument is. Hormones? Biology? Bad government policy? Nature?

It couldn’t possibly be bias or lack of opportunity because, of course, some women are choosing to work at startups, so it would be trivially easy for all women to choose to work at startups, right?

Except that my father’s law school class of 1963 had 3 women in it. That’s right. Three. Now, clearly more women could have joined the class. His law school didn’t have a 3 woman quota or anything.

But the women of 1963 chose not to go to law school. And I’m positive that there were all sorts of blowhards opining that women didn’t go to law school because they were too busy having babies, and this was perfectly normal, and we shouldn’t do a damn thing to promote more women going to law school because WON’T SOMEBODY PLEASE THINK OF THE CHILDREN.

After all, we weren’t actively STOPPING women from going to law school (any longer). It was their choice! Except that, as Penelope points out, more than 50% of the law school graduates now are women.

So, far more women are now choosing to be lawyers. You know, despite the fact that they still have babies. What women wanted, with regard to law school attendance, somehow changed between 1963 and today.

Similarly, long before women had the right to vote in the US, many women didn’t actually want the right to vote. Some even felt that women were biologically not capable of voting well. And for years after they had the right to vote, many people still felt that it was the wrong decision.

How many women in the US do you know who don’t care about voting? Fewer than felt that way a hundred years ago, right?

The point is that “what women want” changes over time. What people want changes over time. Because what we want is hugely driven by social norms and massive cultural shifts and all sorts of things that may seem biological at the time but turn out not to be.

In other words, if suddenly there are a ton of women at startups kicking ass and being awesome, it might turn out that more young women want to join startups in the future. And 25 years from now, we’ll all be laughing at the idiots who said things like “women don’t want to vote...er...go to law school...I mean...join startups!”

Penelope, do you vote? Do you know women who went to law school? I do. And I am forever grateful to the women who fought not just for the right to do these things but to make them seem like totally normal things to do.

I salute the women who said, “Hey, wait a minute. Maybe having a vagina doesn’t determine what I have to want from life! Just because a lot of women want something doesn’t mean that I have to want the exact same thing!”

And I’m still a little annoyed at the women who said, “Oh, women don’t WANT to vote. Voting is for men!” I take that back. I’m a lot annoyed at them. They sucked.

So stop doing it. Stop assuming other women want to make the same choices you do, especially when society has such an enormous and invisible impact on your choices. Stop assuming a young woman just starting her career knows everything about all of the wonderful, exciting career choices she could make.

And mostly, stop making it ok for other people to assume that I want what you want. That’s clearly not true, since what I want most right at this moment is to punch you in the face.

I am a woman. I want to work at a startup. I don’t want to have children. I want to vote. I want to wear stiletto heels and write jQuery, sometimes at the same time. In other words, I am an individual, and I have all sorts of wants that are neither determined nor predicted by my gender.

I am a woman, Penelope, but you don’t have any idea what I want. So, kindly shut the fuck up about it.

Thursday, December 1, 2011

Give the Users What They Really Want

Recently, I’ve been trying to teach startups how to do their own user research. I’ve noticed that I teach a lot of the same things over and over again, since there are a few things about research that seem to be especially difficult for new folks.

One of the most common problems, and possibly the toughest one to overcome, is the tendency to accept solutions from users without understanding the underlying problem. In other words, a user says, “I want x feature,” and instead of learning why they want that feature, new researchers tend to write down, “users want x feature,” and then move on.

This is a huge issue for novices performing research. When you do this, you are letting your users design your product for you, and that’s bad because, in general, users are terrible at design.

Ooh! An Example!


I participated in some user research for a company with an expensive set of products and services. Users coming to the company’s website were looking for information so they could properly evaluate which set of products and services was right for them. Typically, users ended up buying a custom package of products and services.

One thing we heard from several users was that they really wanted more case studies. Case studies, they said, were extremely helpful.

Now, if you’re conducting user research, and a customer tells you that he wants case studies, this might sound like a great idea.

Unfortunately, the user has just presented you with a solution, not a problem. The reason that this is important is that, based on what the actual underlying problem is, there might be several better solutions available to you.

When we followed up on users’ requests for case studies with the question, “Why do you want to see case studies?” we got a variety of answers. Interestingly, the users asking for case studies were all trying to solve entirely different problems. But were case studies really the best solution for all three problems?

Here are the responses, along with some analysis.

“I want to know what other companies similar to mine are doing so that I have a good idea of what I should buy.”


The first user’s “problem” was that he didn’t know how to pick the optimal collection of products for his company. This is a choice problem. It’s like when you’re trying to buy a new home theater system, and you have to make a bunch of interrelated decisions about very expensive items that you probably don’t know much about.

While case studies can certainly be helpful in these instances, it’s often more effective to solve choice problems with some sort of recommendation engine or a selection of pre-set packages.

Both of these help the user figure out the right selection for him more quickly than a long explanation of how somebody else found a good solution that might or might not apply to his situation.

“I want to know what sorts of benefits other companies got from the purchase so I can tell whether it’s worth buying.”


The second user’s “problem” was that he wanted to make sure that he was getting a good value for his money. This is a metrics problem. It’s like when you’re trying to figure out if it’s worth it to buy the more expensive stereo system. You need to understand exactly what you’re getting for your money with each system and then balance the benefits vs the cost.

This problem might have been solved by a price matrix showing exactly what benefits were offered for different products. Alternatively, it would be faster and more effective to display only the pertinent part of the case studies on the product description page - for example, “Customers saw an average of 35% increase in revenue 6 months after installing this product.”

Boiling the case study down to only the parts that are actually important to the user gives you more flexibility to show that information - statistics, metrics, etc. - in more prominent and pertinent places on the site. This increases the impact of those numbers and improves the chance that people will see them.

“I want to see what other sorts of companies you work with so that I can decide whether you have a reputable company.”


The third user’s “problem” was that he hadn’t ever heard of the company selling the products. Since they were expensive products, he wanted the reassurance that companies he had heard of were already clients. This is a social proof problem. It’s like when you’re trying to pick somebody to put a new roof on your house, so you ask your friends for recommendations.

His actual problem could have been solved a lot quicker with a carousel of short client testimonials. Why go to all the trouble of writing up several big case studies when all the user cares about is seeing a Google logo in your client list?

Why This Matters

This shouldn’t come as a surprise to any of you, but users ask for things they’re familiar with, not necessarily what would be best for them. If a user has seen something like case studies before, when he thinks about the value he got from case studies, he’s going to ask for more of the same. He’s not necessarily going to just ask for the part of the case study that was most pertinent to him.

The problem with this is that many people who might also find certain parts of case studies compelling won’t bother to read them because case studies can be quite long or because the user doesn’t think that the particular case study applies to him.

Obviously, this is applicable to a lot more than case studies. For example, I recently saw a very similar situation from buyers and sellers in a social marketplace asking for a “reputation system” when what they really wanted was some sort of reassurance that they wouldn’t get ripped off. I could name a dozen other examples.

The takeaway is that, when somebody asks you for a feature, you need to follow up with questions about why they want the feature, even when you think you already know the answer!

Once you know what their problems really are, you can go about solving them in the most efficient, effective way, rather than the way the user just happened to think of in the study.

Instead of just building what the user asks for, build something that solves the user’s real problem. As an added bonus, you might end up building a smaller, easier feature than the one the user asked for.

Wednesday, November 16, 2011

The Art of the UX Steal

I’ve been building interfaces for a very long time, and I can tell you that the number of times I’ve had to solve a completely new and unusual user problem is remarkably small. This isn’t surprising. The vast majority of products we build incorporate a lot of familiar elements.

For example, think about the number of products you use that include one or more of the following: login, purchasing, comments, rating systems, order history, inventory management, or user generated content.

Do you expect that every single login experience gets redesigned completely from scratch in a vacuum? Of course not! It would be annoying if they all were, since each new version would almost certainly differ just enough to make things confusing. Having design standards for things like logging in makes a lot of sense for both users and designers.

However, this tendency to fall back on patterns, or just to copy whatever Apple/Amazon/Facebook is doing, can cause some problems, especially for startups. There are a few big reasons why you shouldn’t just adopt another company’s solution without serious consideration.

They May Not Want Exactly What You Want


Companies have hidden agendas, and their agenda is not always your agenda, which means that their optimal design is not your optimal design. And if you think that they’re always optimizing for the best user experience, you’ve lost your damn mind.

Want an example? Ok! Have you ever purchased an item and been opted in to receiving email deals from the company’s partner sites? As a user, who likes that? Who thinks that’s a great user experience? Exactly.

Then why do companies do it? They do it because they have made the business (not UX) decision that they make more money by opting people into partner deals than they lose by slightly annoying their customers. That’s a totally reasonable calculation for them to do.

Now, let’s say your biz dev person comes to you and says he wants to add that feature to your checkout process because he has a partner lined up who is willing to pay for the privilege of getting your users’ email addresses. He says it will be ok to add the feature because other big companies are doing it, so it must make money.

But you have no idea how much money they’re getting for making their UX worse. You have no idea of the number of users they may be losing with this practice. And even if you did know their numbers, you can’t decide whether this feature is the right business decision for you until you know what those numbers are going to be for your product.

In an ideal world we could always just choose whatever made the best possible user experience, but realistically, we make these kinds of business/UX tradeoffs all the time. They’re inevitable. Just make sure that you’re making them based on realistic estimates for your product and not on the theory that it’s right because a bigger company is doing it.

They Don’t Do Exactly What You Do


By my count, Amazon has sold at least one of pretty much everything in the world. Ok, I’m just extrapolating from my purchase habits, but you know what I mean.

Not only do they sell products directly, they also allow other companies and individuals to sell through their marketplace. They also sell a lot of different versions of the same product. This makes their product pages pretty complicated.

Does your product do all of those things? If you work for a startup, I certainly hope not, since many of Amazon’s features were added over the course of more than a decade.

If your product doesn’t have all those features, then you might want to do all sorts of things differently than Amazon does. For example, your product pages could be significantly simpler, right? They could emphasize entirely different things or have clearer Calls to Action or more social proof because they don’t need to account for all of Amazon’s different features.

Whether or not you even have product pages, the point is that no other company is doing exactly what you’re doing (or if they are, you have an entirely different problem), so their optimal UX is, by necessity, going to be different from yours.

They Can Get Away with It


If Dante were writing today, the 9th circle of Hell would involve trying to sign into multiple Google accounts at once. True story.

A friend of mine decided to make me angry the other day, so he showed me a Google Docs screen where the Save button was so cleverly hidden that it took him several minutes to locate it. This was on a screen with maybe four elements, and he’s a very senior software engineer, so this probably wasn’t user error. I find the usability of certain Google products almost sadistically poor.

But I put up with it because Google provides me with incredible value for free that I can’t get anywhere else even by paying for it.

I don’t use things like Google Docs for their UX. In fact, I use them in spite of large portions of their UX. And if your UX borrows from Google through some misguided notion that just because Google does it, it must be right, I will quit your product in a freaking heartbeat and bad mouth it to all my friends.

The moral of this story isn’t just “don’t steal UX from Google,” although that’s not bad advice. The moral is that very few companies succeed in spite of their UX, and if you happen to steal UX from them, you’re doing it wrong.

On a side note, you know what had a fabulous UX? The original Google product - the one where there was just a single search box, two buttons, and a hugely successful algorithm for finding great results. Unsurprisingly, that’s the UX that got us all hooked in the first place.

The Right Way to Steal


Now that the horror stories are out of the way, you still shouldn’t be coming up with every design element from scratch.

Not only is it ok to steal a basic login process from another product (although not Google), it’s almost certainly the best possible thing you could do. Having a non-standard way for users to log in to your product is just needlessly confusing.

One product I use on a regular basis used to put its Log In button on the top left of its home page instead of the top right. That one little change meant that I repeatedly had a hard time remembering how to get into the product and wasted several seconds searching for the button. I probably wasn’t the only one to complain, since they fixed it relatively quickly.

Logging in isn’t the only thing to standardize. Any time you have a simple activity that users do regularly in lots of other products, you should at least check to see whether there is a standard and consider adopting it.

Of course, you can always choose not to do things the way everybody else is doing them, but you should have a very strong reason for changing things, and you should definitely A/B test your change against the standard design pattern.

Trust But Verify


Most importantly, when you are planning on stealing - or “adopting a standard” as we’re now going to euphemistically call it - it’s still important to test it.

I like to do quick qualitative tests to observe some people actually using the standard. In fact, often I’ll test the standard on competitors’ products before implementing it, rather than implementing it and then finding out that it’s crap. Then, I’ll test again once it’s implemented in my product.

In general, the more companies that are doing things identically, the less likely it is to be confusing. But it’s still necessary to make sure that the design works in the context of the rest of your product.

Like the post? Follow me on Twitter!

Thursday, November 3, 2011

Idiots, Drama Queens, and Scammers - Improving the Customer Experience with UX

I recently published another article in Smashing Magazine. This one is titled Idiots, Drama Queens, and Scammers - Improving the Customer Experience with UX.

Here's an excerpt:
User experience design isn’t just about building wireframes and Photoshop mock-ups. It extends to areas that you wouldn’t necessarily think are part of the discipline.

For example, your customer service department can have a huge impact on your website’s overall user experience. Similarly, the design of your user experience could have an awfully big effect on your customer service department. Of course, not all of your users will interact with the customer service department, but for those who do, their experience can improve or destroy the customer relationship.


Read more now >

Wednesday, September 21, 2011

How Metrics Can Make You a Better Designer

I have another new article in Smashing Magazine's UX section: How Metrics Can Make You a Better Designer.

Here's a little sample:

Metrics can be a touchy subject in design. When I say things like, “Designers should embrace A/B testing” or “Metrics can improve design,” I often hear concerns.

Many designers tell me they feel that metrics displace creativity or create a paint-by-numbers scenario. They don’t want their training and intuition to be overruled by what a chart says a link color should be.

These are valid concerns, if your company thinks it can replace design with metrics. But if you use them correctly, metrics can vastly improve design and make you an even better designer.


Read the rest here >

Thursday, September 15, 2011

Need Help with Your Design and Research?

I used to do a lot of design and research for companies. Don't get me wrong. I still do design and research, but I’ve recently made a pretty significant change.

I no longer do design and research FOR companies. I now do design and research WITH companies.

I promise this isn’t just semantic nonsense. It has a huge impact on my relationship with clients, and I think it has some good lessons for people who choose to work with outside UX help.

Give a Company a Fish


Let’s take a look at what typically happens when you hire a contractor or an agency. You give the contractor a lot of input about what results you want, and the contractor goes off and produces something that hopefully fits those results.

With a good contractor, you get a lot of discussion and iteration, but at the end, you get a design or a research report that somebody did for you. And that’s all you get.

If you want to change part of the design after the contractor is gone, you run the risk of making major mistakes, because you are very unlikely to understand all the decisions that were made in creating it. If you have a question about the research or want to do a quick follow up about something you learned, you don’t know how to do that yourself.

This means that the next time you want some research or design done, you need to hire somebody to do it for you again. This is great for the contractor, and it’s not bad for companies with big budgets, but it can be especially hard for startups.

Going Fishing Together


Last year, I decided to try a different model. When clients hired me, I came in and worked as part of the team. I was still doing the majority of the design and research, but I worked at the office and tried to integrate into the team as much as possible.

That worked better than the old agency style I was used to. I had more contact with the engineers and product owners. We could iterate on the design faster because we were all in the same room. I learned far more about the product and users. Sometimes they learned a little about the design process.

Still, with some clients, I found that I was the only person in the room while doing customer research. I was the only one coming up with questions I wanted answered. I was still having to schedule design reviews rather than having everybody involved in the design process.

The worst part was that I was the only one learning anything about the customers. But they weren't MY customers!

Too often, what this meant was, when a project was over, everything at the company went right back to where it was before.

I started to look at why some projects ended this way, while in others, the companies seemed to incorporate good design and research skills into their own development process.

Teach a Company to Fish


Based on what I learned from the companies who improved, I have a different model now for all of my new clients. I’m helping companies learn to do more design and research on their own.

Instead of running a research study, I help product owners figure out what sort of research they need to do. I then help them plan it, execute it, analyze the data, and create actionable designs. If this were a sports team, I’d be a coach, not a ringer.

Of course, this does mean a lot more work for my clients. They have to figure out what questions they want answered. They have to talk to their customers. They have to do design work. They have to understand the process. It’s really hard, and not everybody wants to learn to do these things.

But the beauty of it is, once they’ve done it a few times, it all gets easier. It becomes part of the company process. More people in the company become interested in conducting research and creating designs.

Of course, eventually, my clients won’t need me any longer. It may not be the best business model, but I think it’s the best thing I can do for my clients.

What This Could Mean for You


This means that I can help you learn how to be better at research and design. For example, I can work with you on things like:

  • Which type of research is right for you at any given stage of your product development
  • How to plan that research correctly
  • How to moderate a user discussion properly
  • How to analyze your research results
  • How to create usable personas and write good user stories
  • How to turn research results into actionable designs
  • What changes you need to make to your product based on your results
  • When to use metrics and A/B testing in your design process
  • What to build now, what to test, and what to iterate on later

If you’re interested in any of those things, you should contact me at laura@usersknow.com. I’m happy to discuss the process in more detail and explain a typical engagement.

Friday, September 2, 2011

Why Your Test Results Don't Add Up and What To Do About It

Check out my guest blog post for KISSmetrics: Why Website Test Results Don’t Always Add Up & What To Do About It!

Here's a little sample:

If you do enough A/B testing, I promise that you will eventually have some variation of this problem:

You run a test. You see a 10% increase in conversion. You run a different, unrelated test. You see a 20% increase in conversion. You roll both winning branches out to 100% of your customers. You don’t see a 30% increase in conversion.

Why? In every world I’ve ever inhabited, 10 plus 20 equals 30, right? You’ve proven that both changes you’ve made are improvements. Why aren’t you seeing the expected overall increase in conversions when you roll them both out?


Read the Rest at KISSmetrics.
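One bit of arithmetic worth keeping in mind alongside the article: even in the best case, where the two winning changes are completely independent, relative lifts compose multiplicatively rather than additively. A minimal sketch, using made-up numbers:

```python
# Hypothetical example: two independent conversion lifts don't simply add.
baseline = 0.050   # 5% baseline conversion rate (made-up number)
lift_a = 0.10      # test A winner: +10% relative lift
lift_b = 0.20      # test B winner: +20% relative lift

combined = (1 + lift_a) * (1 + lift_b) - 1
print(f"Naive additive expectation: {lift_a + lift_b:.0%}")    # 30%
print(f"Independent multiplicative: {combined:.1%}")           # 32.0%
print(f"New conversion rate: {baseline * (1 + combined):.2%}") # 6.60%
```

And that’s the ceiling: in practice the two changes can interact with each other, or one of the “wins” can be statistical noise, so the real-world result can land well below even that.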


Thursday, August 18, 2011

Breaking the Rules: A UX Case Study

Recently, I was lucky enough to be featured in Smashing Magazine's brand new UX section! Smashing is already a fabulous resource for web design and coding, and I think it's going to be a great place to learn about user experience.

You should read my first article, Breaking the Rules: A UX Case Study.

Here's a little something to get you started:

I read a lot of design articles about best practices for improving the flow of sign-up forms. Most of these articles offer great advice, such as minimizing the number of steps, asking for as little information up front as possible, and providing clear feedback on the status of the user’s data.

If you’re creating a sign-up form, you could do worse than to follow all of these guidelines. On the other hand, you could do a lot better.

Design guidelines aren’t one size fits all. Sometimes you can improve a process by breaking a few rules. The trick is knowing which rules to break for a particular project.


Read the rest of the article!

Tuesday, August 9, 2011

Stop Worrying About the Cupholders

Every startup I’ve ever talked to has too few resources. Programmers, money, marketing...you name it, startups don’t have enough of it.

When you don’t have enough resources, prioritization becomes even more important. You don’t have the luxury to execute every single great idea that you have. You need to pick and choose, and the life of your company depends on choosing wisely.

Why is it that so many startups work so hard on the wrong stuff?

By “the wrong stuff” I mean, of course, stuff that doesn’t move a key metric - projects that don’t convert people into new users or increase revenue or drive retention. And it’s especially problematic for new startups, since they are often missing really important features that would drive all those key metrics.

It’s as if they have a car without any brakes, and they’re worried about building the perfect cupholder.

For some reason, when you’re in the middle of choosing features for your product, it can be really hard to distinguish between brakes and cupholders. How do you do it?

You need to start by asking (and answering) two simple questions:
  • What problem is this solving?
  • How important is this problem in relation to the other problems I have to solve?
To accurately answer these questions, it helps to be able to identify some things that frequently get worked on that just don’t have that big of a return. So, what does a cupholder project look like? It often looks like:

Visual Design

Visual design can be incredibly important, but nine times out of ten, it’s a cupholder. Obviously colors, fonts, and layout can affect things like conversion, but it’s typically an optimization of conversion rather than a conversion driver.

For example, the fact that you allow users to buy things on your website at all has a much bigger impact on revenue than the color of the buy button. Maybe that’s an extreme example, but I’ve seen too many companies spending time quibbling over the visual design of incredibly important features, which just ends up delaying the release of these features.

Go ahead. Make your site pretty. Some of that visual improvement may even contribute to key metrics. But every time you put off releasing a feature in order to make sure that you’ve got exactly the right gradient, ask yourself, “Am I redesigning a cupholder here, or am I turbocharging the engine?”

Monday, August 1, 2011

Hypothesis Generation vs. Validation

A lot of people ask me what sort of research they should be doing on their products. There are a lot of factors that go into deciding which sort of information you should be getting from users, but it pretty much boils down to a question of “what do you want to learn.”

Today, I’m going to explore one of the many ways you can go about looking at this: Hypothesis Generation vs. Hypothesis Validation. Don’t worry, it’s not as complicated as I’ve made it sound.

What is Hypothesis Generation

In a nutshell, hypothesis generation is what helps you come up with new ideas for what you need to change. Sure, you can do this by sitting around in a room and brainstorming new features, but reaching out and learning from your users is a much faster way of getting the right data.

Imagine you were building a product to help people buy shoes online. Hypothesis generation might include things like:

  • Talking to people who buy shoes online to explore what their problems are
  • Talking to people who don’t buy shoes online to understand why
  • Watching people attempt to buy shoes both online and offline in order to understand what their problems really are rather than what they tell you they are
  • Watching people use your product to figure out if you’ve done anything particularly confusing that is keeping them from buying shoes from you

As you can see, you can do hypothesis generation at any point in the development of your product. For example, before you have any product at all, you need to do research to learn about your potential users’ habits and problems. Once you have a product, you need to do hypothesis generation to understand how people are using your product and what problems you’ve caused.

To be clear, the research itself does not generate hypotheses. YOU do that. The goal is not to just go out and have people tell you exactly what they want and then build it. The goal is to gain an understanding of your users or your product to help you think up clever ideas for what to build next.

Good hypothesis generation almost always involves qualitative research. At some point, you need to observe people or talk to people in order to understand them better.

However, you can sometimes use data mining or other metrics analysis to begin to generate a hypothesis. For example, you might look at your registration flow and notice a severe drop-off halfway through. That’s a clue that users are hitting some sort of problem halfway through your registration process, which you can then investigate with qualitative research.
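To make that concrete, here’s the sort of quick-and-dirty funnel arithmetic I mean. This is a generic sketch with hypothetical step names and counts, not anybody’s real registration flow:

```python
# Hypothetical registration funnel: number of users reaching each step.
funnel = [
    ("landing page", 10_000),
    ("started form", 6_200),
    ("confirmed email", 5_800),
    ("profile details", 2_100),   # <- the suspicious drop happens here
    ("finished signup", 1_900),
]

# Print the step-to-step continuation rate and flag severe drop-offs.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count
    flag = "  <-- severe drop-off, investigate" if rate < 0.5 else ""
    print(f"{step} -> {next_step}: {rate:.0%} continue{flag}")
```

An analytics dashboard will give you the same numbers, of course; the point is that the quantitative pass tells you where to look, and the qualitative research tells you why.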

What is Hypothesis Validation

Hypothesis validation is different. In this case, you already have an idea of what is wrong, and you have an idea of how you might possibly fix it. You now have to go out and do some research to figure out if your assumptions and decisions were correct.

For our fictional shoe-buying product, hypothesis validation might look something like:

  • Standard usability testing on a proposed new purchase flow to see if it goes more smoothly than the old one
  • Showing mockups to people in a particular persona group to see if a proposed new feature appeals to that specific group of people
  • A/B testing of changes to see if a new feature improves purchase conversion

Hypothesis validation also almost always involves some sort of tangible thing that is getting tested. That thing could be anything from a wireframe to a prototype to an actual feature, but there’s something that you’re testing and getting concrete data about.

You can use both quantitative and qualitative data to validate a hypothesis, but you have to choose carefully to make sure you’re testing the right thing. In fact, sometimes a combination of the two is most effective. I’ve got some information on choosing the right type of test in my post Qual vs. Quant: When to Listen and When to Measure.

Types of Research

Why is this distinction between generation and validation important? Because figuring out whether you’re generating hypotheses or validating them is necessary for deciding which type of research you want to do.

Want to understand why nobody is registering for your site? Generate some hypotheses with observational testing of new users. Want to see if the mockups for your new registration flow are likely to improve matters? Validate your hypothesis with straight usability testing of a prototype.

These aren’t the only factors that go into determining the type of research necessary for your stage of product development, but they’re an important part of deciding how to learn from your users.

Like the post? Follow me on Twitter!

Wednesday, May 25, 2011

Designers Need to A/B Test Their Designs

The other day, I posted something I strongly believe on Twitter. A few people disagreed. I’d like to address the arguments, and I’d love to hear feedback and counter-arguments in the comments where you have more than 140 characters to tell me I’m wrong.

My original tweet was, “I don't trust designers who don't want their designs a/b tested. They're not interested in knowing if they were wrong.”

Here are some of the real responses I got on Twitter, along with my longer-form replies.

“There’s a difference between A/B testing (public) and internally deciding. Design is also a matter of taste.”

I agree. There is a big difference between A/B testing in public and internally deciding. That’s why I’m such a huge fan of A/B testing. You can debate this stuff for weeks, and often it’s a huge waste of time.

When you’re debating design internally, what you should be asking is, “Which of these designs will be better for the business and for users?” A/B testing tells you conclusively which side is right. Debate over!

Ok, there’s the small exception of short term vs. long term effects, which is addressed later, but in general, it’s more definitive than the opinion of the people in the room.

With regard to the “matter of taste,” that’s both true and false. Sure, different people like different designs. What you’re saying by refusing to A/B test your designs is that your taste as a designer should always trump that of the majority of your users. As long as you like your design, you don’t care whether users agree with you.

If you want your design aesthetic to override that of your users, you should be an artist. I love art. I even, very occasionally, buy some of it.

But I pay for products all the time, and I tend to buy products that I think are well designed, not necessarily ones where the designer thought they were well designed.

“If Apple had done A/B tests for the iPod in 2001 with a user-replaceable battery, that version would’ve likely won—initially.”

Honestly, it still might win. Is taking your iPod to the Apple store when the battery dies really a feature? No! It’s a design tradeoff. They couldn’t create something with the other design elements they wanted that still had a replaceable battery. That’s fine. 


But all other things about the iPod being totally equal, wouldn’t you buy the one where you could replace the battery yourself? I would. The key there is the phrase “totally equal.”

“Seeing far into the future of technology is not something consumers are particularly great at.”

I feel like the guy who made this argument was confusing A/B testing with bad qualitative testing or just asking users what they would like to see in a product.

This isn’t what A/B testing does. A/B testing measures actual user behavior right now. If I make this change, will they give me more money? It has literally nothing to do with asking users to figure out the future of technology.

“A/B testing has value but shouldn't be litmus test for designer or a design”

Really? What should be the litmus test for a designer or a design if not, “does this change or set of changes actually improve the key metrics of my company”?

In the end, isn’t that the litmus test for everybody in a company? Are you contributing to the profitability of the business in some way?

If you have some better way of figuring out if your design changes are actually improving real metrics, I’d love to hear about it. We can make THAT the litmus test for design.

“Data is valuable but must be interpreted. Doesn't "prove" wrongness or rightness. Designer still has judgment.”

I agree with the first sentence. Data certainly must be interpreted. I even agree that certain design changes may hurt certain metrics, and that can be ok if they’re improving other metrics or are shown to improve things in the long run.

But the only way to know if your overall design is actually making things better for your users is by scientifically testing it against a control.

If your overall design changes aren’t improving key metrics, where’s the judgment there? If you release something that is meant to increase the number of signups and it decreases the number of signups, I think that pretty effectively “proves wrongness.”

The great thing about A/B testing is that you know when this happens.
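For anyone who hasn’t run one of these, the mechanics are simple enough to sketch. This is a generic two-proportion z-test on made-up numbers, not any particular testing tool’s implementation:

```python
import math

# Hypothetical A/B result: did the redesign change the signup rate?
control_n, control_conv = 5_000, 400   # control: 400/5000 = 8.0% signup
variant_n, variant_conv = 5_000, 330   # redesign: 330/5000 = 6.6% signup

p1, p2 = control_conv / control_n, variant_conv / variant_n
pooled = (control_conv + variant_conv) / (control_n + variant_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
z = (p2 - p1) / se

print(f"control {p1:.1%}, variant {p2:.1%}, z = {z:.2f}")
# |z| > 1.96 is significant at the 95% level. Here z is about -2.7,
# so the redesign measurably hurt signups - that's "proving wrongness."
```

Any decent A/B testing tool does this math for you; the sketch is just to show there’s no magic in it.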

“Is it the designers fault, surely more appropriate to an IA? After all the IA should dictate the feel/flow.”

First off, I don’t work for companies that are big enough to draw a distinction between the two, but I’m sure there’s enough blame to go around.

Secondly, I think that everybody in an organization has the responsibility to improve key metrics. If you think that your work shouldn’t increase revenue, retention, or other numbers you want higher, why should you be employed?

Design of all kinds is important and can have a huge impact on company profitability. That impact can and should be measured. You don’t get a pass just because you’re not changing flow.

“A/B tests are a snapshot of current variables. They don’t embody nor convey a bigger strategy or long-term vision.”

Also, “That’s only an absolute truth you can rely on if you A/B test for the entire lifespan of the product, which defeats the point.”

These are excellent points, and they are a drawback of A/B testing. It’s sometimes tough to tell what the long term effects of a particular design change are going to be from A/B testing. Also, A/B testing doesn’t easily account for design changes that are a part of a larger design strategy.

In other words, sometimes you’re going to make changes that cause problems with your metrics in the short term, because you strongly believe that it’s going to improve things long term.

However, I believe that you address this by recognizing the potential for problems and designing a better test, not by refusing to A/B test at all.

Just because this particular tool isn’t perfect doesn’t mean we get to fall back on “trust the designers implicitly and never make them check their work.” That doesn’t work out so well sometimes either.

An Argument I Didn’t Hear

There’s one really good argument that I didn’t get, although some of the above tweets touched on it. Sometimes changes that individually test well don’t test well as a whole.

This is a really serious problem with A/B testing because you can wind up with Frankenstein-style interfaces. Each individual decision wins, but the combination is a giant mess.

Again, you don’t address this by not A/B testing. You address it by designing better tests and making sure that all of your combined decisions are still improving things.
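One concrete way to design that better test (my sketch, not a prescription): run the changes factorially, so the combination gets its own measurement instead of being inferred from the individual wins. Assuming a simple hash-based bucketing scheme:

```python
import hashlib

# Hypothetical 2x2 factorial test: both changes alone, plus the combination.
ARMS = [
    ("old_header", "old_button"),
    ("new_header", "old_button"),
    ("old_header", "new_button"),
    ("new_header", "new_button"),  # the combined design is measured directly
]

def assign(user_id: str) -> tuple[str, str]:
    # Stable hash so each user always sees the same combination.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % len(ARMS)
    return ARMS[bucket]

print(assign("user-42"))
```

With traffic split this way, a combination that tests worse than its parts shows up in the data instead of in production.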

How I Really Feel

Look, if I’m hiring for a company that wants to make money (and most of them do), I want my designers to understand how their changes actually affect my bottom line.

No matter how great a designer thinks his or her design is, if it hurts my revenue and retention or other key metrics, it’s a bad design for my company and my users.

Saying you’re against having your designs A/B tested sounds like you’re saying that you just don’t care whether what you’re changing works for users and the company. As a designer, you’re welcome to do that, but I’m not going to work with you.

Like the post? Follow me on Twitter!

Tuesday, May 24, 2011

5 Fun Ways to Ruin Your Startup

So, you’re interested in ruining your startup. At least, that’s what it seems like based on a lot of decisions I see some companies making.

Let’s talk about some of those terrible decisions that really hurt startups.

Hire Big Thinkers

Here’s the thing about Big Thinkers or people who describe themselves as Big Picture People. They don’t execute. At least, they don’t execute in any way that is helpful to a startup.

Sure, there are a few people who can both lead and get their hands dirty with details. If you find one of those people, hire them immediately.

But more often, I see startups stall out because they’ve got somebody making decisions who doesn’t have to actually implement any of those decisions. They’re delegators. And the problem is, at very early stage startups, there just aren’t enough people to delegate TO.

If you’ve got a team of four or five people (or even ten or fifteen), every person should be spending the majority of his or her time actually building, making, designing, writing, testing, selling, or some other verb that isn’t “setting direction” or “planning” or “establishing policies.”

Want a successful startup? Hire Big Doers, not Big Thinkers.

Talk About Awesome Features All The Time

Yes, yes. You have this fantastic idea for the next big pivot that’s going to make you all rich. But you know what? That idea that you had 2 months ago that you still haven’t finished building was also fantastic. So is the one you’ll have 2 months from now. Also the one you’ll have 2 minutes from now.

Startup people are incredibly rich in ideas. Unfortunately, they tend to be broke in every other conceivable resource.

A great way to ruin your startup is to spend all of your time in meetings discussing in detail all the wonderful features you’re going to add in the future. Instead, capture the broad outline of each idea quickly and put it in your backlog. Then, when you’ve actually built something and need to move on to something new, see which of the ideas you’ve collected would solve a real customer need. THEN design and build them.

Want a successful startup? Sure, you need to dedicate a little bit of time to thinking about the future, but spend a hell of a lot more time working on the present.

Wait To Ship Until It’s Perfect

It can be tough to release something into the wild before you think it’s perfect. But the thing is, it’s never going to be perfect, and the faster you get it out there, the faster you’re going to start learning which parts are the least perfect.

The longer you put off getting something in front of users, the more money you’re going to spend on something that might very well fail. Wouldn’t it be better to find that out early enough to turn it around and make it awesome?

Want a successful startup? Release small pieces of your product often, and get over worrying that it’s ugly or doesn’t work exactly the way you want it to. You’re just going to end up changing it all anyway.

Work 40 Hours a Week

This one may not be what you expect. It’s not some diatribe about how startup employees need to work 24/7 and not have outside lives and eat all their meals at their desks. If that works for you, great. Personally, I enjoy going outside.

But you do need to acknowledge that work at a startup doesn’t follow a strict 9-5 routine. Sometimes you need to check on things over the weekend or answer customer complaints late at night. Sometimes you need to make a final push to get something out the door quickly. Sometimes decisions need to be made outside of regular business hours, and there isn’t anybody else to make them.

Want a successful startup? You don’t need to live at the office, but you do need to be aware of what’s happening and be able to react when necessary. If you want to turn your phone off at 5pm on Fridays, you might consider working someplace where you’ve got more people to back you up. 

Make A Lot of PowerPoint Decks

Sure, investors love them, and you’ve always got to show something to your board, but I’ve seen this get really out of hand. If you’re spending an hour or two a week building slides to share information with five other people, you are wasting everyone’s time.

I get that there’s important information you need to share with the team, but the problem with PowerPoint is that people start doing things like tweaking the formatting and hunting for funny pictures to make their points. A whiteboard works just as well for writing a few bullets, and it’ll get you out of meetings faster, not to mention taking far less prep time.


Want a successful startup? Consider creating a simple dashboard of all the metrics that everybody in the company should be monitoring so that they can see the pertinent information at any time. That way, nobody’s waiting on you to build graphs and paste them into a deck once a week.

Like the post? Follow me on Twitter!

Friday, April 15, 2011

User Research You Should Be Doing (but probably aren't)

Startups know they should get out of the building and talk to their customers, but sometimes they’re a little too literal about it. There are tons of ways to get great information from your customers. The trick is knowing which technique answers the questions you have right now.

Sure, you’re doing usability tests and trying to have customer development interviews, but here are a few slightly unusual qualitative user research techniques you should be employing but probably aren’t.

Competitor Usability Testing

Have you ever considered running a user test on a competitor’s site?

This one’s fun because it feels a little sneaky. It also gets you a tremendous amount of great information, since chances are somebody is already making mistakes that you don’t have to make.

For example, when one of my clients, Crave, wanted to build a marketplace for buying and selling collectibles, we spent time watching people using other shopping and selling sites. We learned what people loved and hated about the products they were already using, so we could create a product that incorporated only the good bits.

The result was a buying and selling experience that users preferred to several big name shopping sites that will remain nameless.

Bonus tip: There’s always the temptation to borrow ideas from a big competitor with the excuse, “well, so and so is doing it, and they’re successful, so it must be right!” Guess what? Sometimes other companies are successful for a lot of reasons other than that thing you’re stealing from them. Make sure users like that part of a competitor's product before using it in your own.

Tuesday, April 5, 2011

Creating a Great Design and Research Culture

I led a conversation recently at Web 2.0 Expo about creating a great design and research culture at your startup. To be clear, I didn’t offer to run it because I’m an expert, but it’s a topic I’m extremely interested in. I wanted to find out from other people what their problems have been and see if we could help each other solve those problems.

The most interesting thing to me was how similar many of the problems were, which leads me to hypothesize that too many companies are making the same mistakes over and over when trying to integrate design and research into their organizations.

Here are a few of the common complaints I heard and some of the solutions that were proposed.

Keeping Design in a Silo

The most common problem was bad communication between the design team and other teams within the company. One participant said that, in her company, the visual designers were on another floor from the UX designers, and the designs didn’t always translate correctly.

Another participant talked about a company where the engineers, designers, and strategy people were all in different countries. The cultural differences between the different teams led to even more communication problems.

Solution: Our proposed solution to this problem was to blend teams whenever possible. A participant told us that, when they embedded designers with the engineers, all sorts of good things happened. Not only did communication improve because they were all sitting together, but they actually became friends, which made them all more willing to listen to different points of view.

Tuesday, March 15, 2011

What Makes UX Lean - My Talk from SXSW

If you couldn’t make it to SXSW this year, there was a fantastic, all day lean startup track with talks from lots of lean startup experts. I was lucky enough to be asked to be on the Lean UX panel, along with the always awesome Janice Fraser, Ian McFarland, and Dan Martell.

I gave a short talk on what makes Lean UX Lean. Since I’m a blogger at heart, I wrote down pretty much everything I was going to say first, which means I can now publish a draft of the talk here! If you didn’t get to hear the panel, or if you did and want a quick refresher, please enjoy!


I’ve been a user experience designer for a lot of years, and I’ve worked with a lot of lean startups, which is part of the reason why I got a call last year from Manuel Rosso, the CEO of Food on the Table.

Now, Food on the Table is a very lean startup here in Austin. Because they’re a lean startup, they measure absolutely everything. And because they measure everything, Manuel knew immediately when the product developed an activation problem.

The whole project has been written up in a post for Eric’s Startup Lessons Learned blog, and I strongly recommend that you go read it if you haven’t already. It has a lot of tips about how to incorporate design into your startup that you’ll hopefully find helpful.

But today, I want to go a little deeper into what made that project a good example of Lean UX. Because, during that project, we did a lot of things that you might do in any sort of UX project at any sort of company.

For example, we did qualitative user research to understand why users were having a problem. We made sketches and built interactive prototypes, and we tested and iterated on them.

These are wonderful, helpful things to do, but they’re not unique to Lean User Experience Design. They’re part of User Centered Design, which I’m a huge fan of, and I’ve done all of those things in waterfall projects at giant companies that were anything but lean.

So, what are a few things that made this a Lean UX project and not just a regular old redesign?

Integrating Quantitative Research

I think the first hallmark of Lean UX is using quantitative metrics to both drive and validate design changes. What does that mean? Well, it means that the reason we were working on the first time user experience was because a specific metric, activation, wasn’t as high as the team wanted it to be.

Quantitative metrics didn’t tell us exactly why we had a problem - we needed to do our qualitative research to understand that - but they did tell us what our most immediate problem was, which helped us to understand where we should start improving our user experience.

In that way, the metric drove the product decision.

Quantitative metrics also meant that we knew, at the end of the project, we’d be validating our work with an a/b test against the original design. That quantitative validation of design really helps improve the design process over the long run, because we can see what sorts of changes have the biggest positive impact on our end user experience. That lets us improve the ROI on future design projects.
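
If you’re curious what that validation looks like under the hood, here’s a toy significance check in TypeScript. The conversion counts are invented, and any decent a/b testing tool will do this for you, but it’s the same arithmetic underneath.

    // Two-proportion z-test: did variant B (the redesign) beat variant A?
    function zScore(convA: number, nA: number, convB: number, nB: number): number {
      const pA = convA / nA;
      const pB = convB / nB;
      const pooled = (convA + convB) / (nA + nB);
      const stdErr = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
      return (pB - pA) / stdErr;
    }

    // |z| > 1.96 is roughly significant at the 95% level.
    console.log(zScore(120, 1000, 160, 1000)); // ≈ 2.58, so the lift looks real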

Thursday, March 10, 2011

When Is a Design Done?

I was talking with a designer about Lean UX. I was explaining that one of the hallmarks of Lean UX is to get a good, but not complete version of a product or feature designed and built and then iterate on it later. She thought this sounded like an interesting approach, but then she asked, “When do you know you’re done?”

Figuring out when you’re “done” is tricky for any design or redesign project, unless you’re a consulting agency, of course, in which case the answer is, “when the client runs out of money.” But I realized that, in Lean UX, figuring out when you’re done is actually incredibly easy.

You’re done when your metrics tell you you’re done.

Let me explain. No product is ever actually “done.” There is always something you could do to improve it. However, projects can certainly be done. The trick is that you have to choose your projects correctly.

What’s the correct way to choose a Lean UX project? Every Lean UX project should be chosen based on a metric.

This may piss off a lot of designers who want to make wonderful, exciting, super cool designs just for the sake of design or user happiness, but when it comes down to it, unless you’re independently wealthy, every design change you make should move a number that is important to your business.

Now, it is a lucky break for those of us who care deeply about our users that improving the overall user experience of the product frequently improves some number that the business people care about. But not every single thing you can do to make a user happy has the same ROI for the business. And not every improvement makes the right people happy at the right time.

That’s why the UX projects you choose should be based on metrics.

Let me give you an example. Whoever it is at your startup who is in charge of running the business should have a pretty good idea of what your various metrics have to be in order for you to all retire and buy yachts. For example, your Activation number may have to be 20% and your Retention may have to be 70%. (Please note, I made these numbers up. Your metrics may vary.)

They pick these numbers because they know that having, for example, a 99% retention rate and a 1% activation rate may lead to retaining 3 incredibly happy users forever, which is suboptimal from a business perspective.
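
To make that arithmetic concrete, here’s a toy funnel calculation in TypeScript (the 300 visitors are invented to match the three happy users above):

    // Retained users = visitors × activation × retention.
    const visitors = 300;
    const scenarios = [
      { activation: 0.01, retention: 0.99 }, // 3 activated, ~3 retained
      { activation: 0.20, retention: 0.70 }, // 60 activated, 42 retained
    ];
    for (const s of scenarios) {
      const retained = Math.round(visitors * s.activation * s.retention);
      console.log(`${s.activation * 100}% activation, ${s.retention * 100}% retention -> ${retained} retained`);
    }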

So, if your activation number is at 10%, your business folks may come to you and say, “we need to turn more of our acquired traffic into regular users because we have identified this as the most important problem to solve at this moment.” You respond, “Great! How many more do you need?” They explain that you need to get activation from 10% to 20%.

You will notice that the metrics are not driving your design decisions. Nor are they driving your feature requirements or any other product changes. They are simply telling you what your biggest business problem currently is.

Now, it’s up to you as a designer or product owner to figure out what is keeping the activation number low and then come up with some ideas of how to fix it. You do this with what I like to call “research and design” or, alternatively, “that thing you are paid to do.”

You may have dozens of wonderful ideas for how to fix the problem, and you may love and believe in all of them. You may not, however, actually execute every single one of them.

This is where the Lean part comes in.

Ideally, you will design and execute as many of the fixes as necessary in order to move the number to where you want it to be. Maybe you’re awesome (or awesomely lucky), and you move that activation number on the first try with a very small bug fix.

Does that mean you never get to implement the super sweet, but somewhat complicated, feature that you know will make users incredibly happy and improve activation even more? No! Unfortunately, you may not get to implement it just yet.

You see, once you got your activation number to where it needed to be, it stopped being the most important problem to solve. Now, maybe you need to work on getting retention higher or improving revenue or referral.

On the flip side, maybe you redesign the first time user flow and improve activation, but not by enough. That means you should continue working on it. Figure out why your changes didn’t have as big of an impact as you thought they would, and then try some new things.

You’re not “finished” until your metrics are where you want them to be.

Why is this important? Startups have a ridiculous number of things to do, and they typically have limited resources. It can be incredibly difficult to prioritize when to keep working on a feature or an area of the product, and when to move on.

By setting the goals ahead of time based on metrics that are critical to the business, it becomes much easier to know when you’re “done,” and when you should keep optimizing or redesigning.

Like the post? Follow me on Twitter!

Like Lean UX but hate reading? I'll be on the UX panel at the Lean Startup track at SXSW. You should come see it and then say hi to me afterward.

Monday, February 28, 2011

Qual vs. Quant: When to Listen and When to Measure

I have written about qualitative vs quantitative research before, but I still get a lot of questions about it. To answer some of those questions, I want to do a bit of a deeper dive here and give a few examples to help startups answer the key question.

To be clear, that key question is “when should I use qualitative research, and when should I use quantitative research for the best results?” Another way of looking at this is, “when should I be listening to users, and when should I just be shipping code and looking at the metrics?”

The real answer is that you should do both constantly, but there are times when one is significantly more helpful than the other.

I will continue to repeat my cardinal rule: Quantitative research tells you WHAT your problem is. Qualitative research tells you WHY you have that problem.

Now, let’s look at what that actually means to you when you’re making product decisions.

A One Variable Change

When you’re trying to decide between qualitative and quantitative testing for any given change or feature, you need to figure out how many variables you’re changing.

Here’s a simple example: You have a product page with a buy button on it. You want to see if the buy button performs better if it’s higher on the page without really changing anything else. Which do you do? Qualitative or quantitative?

That’s right, I said this one was simple. There’s absolutely no reason to qualitatively test this before shipping it. Just get this in front of users and measure their actual rate of clicking on the button.
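
In case “just get this in front of users” sounds abstract, here’s a bare-bones sketch in TypeScript. The trackEvent function is a stand-in for whatever analytics call you already have; the point is a stable 50/50 split and a click count per variant.

    // Hash the user id into a stable bucket so each user always sees one variant.
    function buttonVariant(userId: string): "high" | "low" {
      let hash = 0;
      for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
      return hash % 2 === 0 ? "high" : "low";
    }

    // Stand-in for your real analytics call.
    const trackEvent = (name: string, props: Record<string, string>) =>
      console.log(name, props);

    function onBuyClick(userId: string) {
      trackEvent("buy_click", { variant: buttonVariant(userId) });
    }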

The fact is, with a change this small, users in a testing session or discussion aren’t going to be able to give you any decent information. Hell, they probably won’t even notice the difference. Qualitative feedback here is not going to be worth the time and money it takes to set up interviews, talk to users, and analyze the data.

More importantly, since you are only changing one variable, if user behavior changes, you already have a really good idea WHY it changed. It changed because the CTA button was in a better place. There’s nothing mysterious going on here.

There’s an exception! In a few cases, you are going to ship a change that seems incredibly simple, and you are going to see an enormous and surprising change in your metrics (either positive or negative). If this happens, it’s worth running some observational tests with something like UserTesting.com where you just watch people using the feature both before and after the change to see if anything weird is happening. For example, you may have introduced a bug, or you may have made it so that the button is no longer visible to certain users.

Thursday, February 24, 2011

What Does Your User Know?

You and your customer have very different sets of information. This shouldn’t come as a surprise to you, since by now it should be obvious that you are not your user.

Sometimes it can be very hard to distinguish what you know about your product or industry from what your user knows. But it’s important! Making assumptions about your user’s knowledge can lead to products that are impossible for normal people to use.

You Know...

The Details of How Your Product Works

You know all the technical and implementation details of your product. Your user doesn’t even know what those things mean.

One company I worked with had a feature that allowed users to mark off all the items they owned from a list. The engineers knew that, when a user made a selection, that selection was sent to the server in an AJAX request, so the account was kept constantly up to date.

The users didn’t know this. During testing, several users went through, marked off all their selections, and then searched in vain for a Save button. They assumed that they would have to save their work, since they had no idea about the AJAX request that was happening in the background.

The solution was either to allow users to explicitly save with a button or to give them a small amount of feedback, via a very brief wait spinner, while each item was being saved.

Once we implemented the latter solution, users immediately understood that each click was actually saving the item automatically, and they no longer looked for that Save button.
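
For the curious, the fix is only a few lines. Here’s a sketch in TypeScript; the /api/items endpoint and the spinner element ids are made up for the example, but the pattern is just “show feedback while the background save runs.”

    // Save the selection in the background and flash a brief spinner so the
    // user can tell each click is being saved automatically.
    async function markOwned(checkbox: HTMLInputElement, itemId: string) {
      const spinner = document.getElementById(`spinner-${itemId}`)!;
      spinner.hidden = false; // brief "saving..." feedback
      try {
        await fetch(`/api/items/${itemId}/owned`, { // hypothetical endpoint
          method: "PUT",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ owned: checkbox.checked }),
        });
      } finally {
        spinner.hidden = true; // saved; no Save button needed
      }
    }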

The take away: Don’t make any assumptions that your users understand what you’re doing for them unless you’re explicitly telling them in some way.

Every Single Feature and Its Purpose

You designed and built every feature in your product, so you know exactly what each of them does and where to find it in your product. Your user knows only what she finds during her time using your product.

One company I worked with had a very useful feature. When I interviewed users about new features they’d like to see, many of them requested the feature, even though it was already in the product! They simply had no idea that it even existed.

The take away: It’s not enough to create fabulous new features. You have to make sure that your features are discoverable by normal users.

Specialized Knowledge About the Product Space

Sometimes we build products to help people do hard things more easily.

Tax preparation software is a fantastic example of this. It is safe to say that most of us who use tax preparation software do not know nearly as much about tax preparation as the people who are building the software. At least, I hope they know more than I do!

The problem arises when we lose touch with exactly which parts of the space the users understand well. We can start to use jargon or terminology that makes perfect sense to us because we hear it all the time. We can design processes that seem completely reasonable if the user already knows the goal.

Unfortunately, this creates complicated, confusing products that assume a much higher level of understanding than the user has.

The take away: When you’re trying to help users accomplish a complicated goal, you need to work even harder to keep the interface simple.

Of course, your customer knows some stuff you don’t know...

How She’s Used to Doing Things

If you’re creating a product that is meant to help users do something they already do (again, tax preparation or any business software), your goal is to create a generic experience that will satisfy as many people in your core demographic as possible.

But remember, each of those users already does things slightly differently. For example, if you’re helping people who sell things on eBay, you have to understand that each seller already has a process that she follows - from pricing to listing to shipping to dealing with customers.

Asking your users to change too many of their behaviors in order to use your product creates a huge barrier to acceptance.

If you only talk to a few potential customers (or, even worse, none at all), you run the risk of creating something that isn’t broadly usable by all sorts of different users.

The take away: Understanding the variations in user behavior will help you deliver something that is usable by a larger segment of your user base.

Monday, January 24, 2011

Two Stupid Reasons for Complicated Products

I frequently get asked by startups to simplify products. In general, companies are fantastic at coming up with great feature ideas, but they tend to find it harder to either kill underperforming features or properly integrate new ideas that got tacked on as an experiment or pivot.

Because I get called in when a company already has a product that new users find confusing, I see a lot of the same mistakes repeated. I also hear the same excuses for those mistakes.

Often, when I’m looking at a new product, I’ll find very similar features in different parts of the interface.

For example, one social product had three completely different ways of searching for friends the user might know. Now, I don’t mean that there were different criteria you could use - like email address or interests or user name. That would have been fine.

I mean there were three completely different places the user could go in the product to find three completely different features that were meant to help people search for their friends. There was huge functionality overlap among the three features, but they were all slightly different.

There’s a Reason For That


Of course, I mentioned that it seemed odd and confusing to have three different places to go to do essentially the same thing. And the product owner patiently explained to me that there was a reason for that.

The product owner then launched into a detailed description of how the first one had been built by the team as an experiment. A few months later, since the experiment went well, but was a little slow, the team migrated to a different technology for search, and built a second version of the feature alongside the first one to see if it could be faster.

Since the product owner hadn’t given them any requirements for the new version other than “go faster,” and the new technology made some of the old functionality tricky, the second version didn’t have exactly the same capabilities as the first. So, the team decided it wasn’t an adequate replacement for the old version. They released it anyway, since they’d already built it, and it was faster.

The third version of the feature had been built as part of a larger feature, but the rest of that larger feature had been killed, and only the search part remained. It had some neat new functionality that users liked, but the team felt it didn’t really replace either of the other two versions.

Moreover, the product owner explained to me that different types of users liked different versions of friend search, so he was hesitant to kill any of them, because whichever one he killed would upset somebody.

So, he was stuck supporting three different variations of the same feature, and new users were overwhelmed by choice for where they were supposed to go to find their friends.

And here’s the thing: users don’t actually want this sort of choice. This sort of choice is confusing. It makes navigating the product cumbersome, and it’s unpleasantly surprising to constantly find new, slightly different ways to do the same thing.

There’s a Big Difference


This isn’t the only way that the problem manifests. Sometimes when I point out similar features in different places, the product owner reassures me that “there is a big difference” between the features.

This happened with a client when I pointed out that there were two different types of quizzes in one section of the product, but they had different names and placements. I asked why they couldn’t be combined into one section, so that the clutter on the page would be reduced and the user could always know where to go to take quizzes.

He assured me that there was a big difference between the two types of quizzes. When I asked him to elaborate, he went into detail about how the types of quizzes were different on the back end, and how one was often used as a business development tool while the other was user generated.

In other words, both of the “big differences” were things that were only different to the company. They were the sorts of differences that a user would never notice. All a user would notice would be that the quizzes were sometimes in one place and sometimes in another, which would be confusing and frustrating if she was looking for that feature.

How to Avoid This


Take a hard look at your product. Are there any similar features? You’ll be surprised at how often the answer is yes. If there are, ask yourself what your reasons were for building the different variations.

If your reasons for building different versions of the same feature are based entirely on technology or business development (in other words, things that only matter to YOU and not your users), you’ve got a problem.

The most customer friendly way of dealing with it is to try to come up with a superset of the best functionality from all the versions and improve one of the versions until it satisfies as many user stories as possible.

But even if you don’t have the time or resources to consolidate them into one great version of the product, just killing the duplicated features will ultimately simplify your product and make it more useful and understandable for all of your customers.

Like the post? Follow me on Twitter!

Tuesday, January 18, 2011

Lean UX - A Case Study

For those very, very few (ok, none) of you who read my blog but don't read Eric Ries's blog, Startup Lessons Learned, I have some exciting news for you. But first, why the hell aren't you reading Eric's blog? You really should. It's great.

I've written a guest post that now appears on the Startup Lessons Learned blog. It's a case study of a UX project I did with the lean startup Food on the Table.

If you're wondering whether design works well with lean startups, I answer that question in the post. Spoiler alert: The answer is 'yes'.

Thursday, January 6, 2011

Testing Whether Your Users Will Buy

As you all know by now, I’m a huge proponent of qualitative user testing. I think it’s wonderful for learning about your users and product.

But it’s not a panacea. The fact is, there are many questions that qualitative testing either doesn’t answer well or for which qualitative testing isn’t the most efficient solution. I cover some of them in my A Faster Horse post.

The trick is knowing which questions you can answer by listening to your users and which questions need a different methodology.

Unfortunately, one of the most important questions people want answered isn’t particularly well suited to qualitative testing.

If I Build It, Will They Buy?

I get asked a lot whether users will buy a product if the team adds a specific feature. Sadly, I always have to answer, “I have no idea.”

The problem is, people are terrible at predicting their future behavior. Imagine if somebody were to ask you if you were going to buy a car this year. Now, for some of you, that answer is almost certainly yes, and for others it’s almost certainly no. But for most of us, the answer is, “it depends on the circumstances.”

For some, the addition of a new feature - say, an electric motor - might be the deciding factor, but for many the decision to buy a car depends on a lot of factors, most of which aren’t controlled by the car manufacturer: the economy, whether a current car breaks down, whether we win the lottery or land that job at Goldman Sachs, etc. There are other factors that are under the control of the car company but aren't related to the feature: maybe the new electric car is not the right size or isn't in our price range or isn't our style.

This is true for smaller purchases too. Can you absolutely answer whether or not you will eat a cookie this week? Unless you never eat cookies (I'm told these people exist), it’s probably not something you give a lot of thought to. If somebody were to ask you in a user study, your answer would be no better than a guess and would possibly even be biased by the simple act of having the question asked.

Admit it, a cookie sounds kind of good right now, doesn’t it?

There are other reasons why qualitative testing isn't great at predicting future behavior, but I'm not going to bore you with them. The fact is, it's just not the most efficient or effective method for answering the question, "If I build it, will they come?"

What Questions Can Qualitative Research Answer Well?

Qualitative research is phenomenal for telling you whether your users can do x. It tells you whether the feature makes sense to them and whether they can complete a given task successfully.