Imagine that I have a product that cures cancer. Sadly, the side effect is that you may lose a few toes. I’ll bet that I would still have a huge line of customers who want to use my product.
Now, instead of curing cancer, imagine that the product tells you where you should eat lunch. Unfortunately, the toe-loss thing still applies. I’m going to go out on a limb and say that I’ll probably have far fewer customers.
This seems obvious. Sacrificing a toe or three doesn’t seem like a big deal when weighed against your life, but it’s a different story when it’s just lunch. Even a really good lunch.
If you are asking your users to put up with a lot of pain, you need to do so in the context of giving them something extraordinary. I get asked all the time how to tell when something is good enough. Does it have enough features? Is the visual design pretty? What if it has a couple of bugs? The answer to all of these questions is that it depends on whether the users are getting enough in return.
Every startup has a slightly different calculus for deciding what product to put out into the world, but I’m going to give you a piece of advice that will make this all a little easier: if you’re solving a really big problem that nobody else is solving, your early adopters will be quite tolerant.
This is one of the reasons why B2B applications often get away with being so awful and hard to use. If a product helps me do my job better and makes me more money, it’s solving a big problem for me. I’ll put up with a few missing features or a less than stellar experience. (There are lots of other reasons B2B applications are terrible, of course, but that’s not what this blog post is about.)
Of course, there is a minimum standard for anything you put out in the world. People have to understand what it does, for example, and be able to use it to solve their really serious problem. In other words, it needs to be both usable and useful. But the more useful it is, the more of a pass you get on a lot of the nice-to-haves.
To be clear, this is not a pass to make your product awful. Think of this as an encouragement to build something important that solves serious problems for people and to get it into their hands as quickly as possible.
Like the post? Follow me on Twitter!
Like the post but wish there were more of it? Buy the book!
Monday, October 14, 2013
Stop Making Users Explore
Often, entrepreneurs ask me something to the effect of, “What’s the best way to let new users explore my product?”
My answer is almost always a variation of, “Stop it.” In order to be slightly more helpful, let’s look at why this is a terrible question.
Users Don’t Care About Exploring Your Product
Nobody cares about your product. Fundamentally, what users care about is themselves. They are using your product as a means to an end. We knew this back in 1960 when Theodore Levitt explained that when customers buy quarter inch drills, they really are buying quarter inch holes.

Think about the last time you bought a drill. Did you sit down with the drill in order to spend time exploring it? Not unless you’re some sort of drill fetishist. What you almost certainly did was try to figure out the fastest way that you could set about completing the project for which you had bought the drill.
The same is true of whatever product you’re building. I know that you care deeply about the user interface of your product and all of the delightful features you have so lovingly handcrafted. Sadly, nobody else does. At least, not in the same way that you do.
People want whatever your product promises to do for them, and they want it to happen as quickly and easily as possible. They don’t want to explore your tax preparation software. They want their taxes done. They don’t want to delve deeply into the mysteries of your To Do List software. They want to not miss deadlines.
But What About B2B Products?
I know, I know. B2B products are different! They’re more complex! They have so many features! They require training and exploration!

Nonsense.
All of those incredibly complicated, feature-dense pieces of B2B software that require weeks of training are getting disrupted by things that humans actually understand. I worked with a company that required all documents be shared by filing a ticket with IT to create a personal folder on a shared server which then required mounting a new drive onto the desktop and...blah blah blah. Everybody just used Dropbox, even though it was officially forbidden by the company.
The fact is, people in big companies are forced to work with dozens of complicated products every single day. The introduction of a new, complicated product does not instill in them the desire to spend a lot of their day exploring it. It tends to make them sigh resignedly and figure out if there is some way to avoid learning the new system until it goes away and is replaced by something else.
The only way to make a product that people at work want to use is to make a product that is so obvious and easy to operate that they don’t feel like they have to explore it. They can just jump in, share a document, send an email, or do whatever task it is that they wanted to do originally. They shouldn’t have to explore anything to do their jobs.
But...but...but...Games!
Nope. Sorry. Still very little open exploration for new users.

I mean, sure, you can wander all over GTAV and steal as many cars as you want. But have you ever noticed how many quests and tasks and hints you’re given along the way as a new player? Actually, you probably haven’t. Really successful games are fabulous at getting you onboarded without making you feel like you’re going through a tedious training session but also without just dumping you directly into the deep end.
In fact, in good games, the real exploration doesn’t come until users are pretty comfortable with all the basic actions they need to be successful. Often, advanced features are hidden from users until they are unlocked. This not only provides the user with an incentive to keep playing, but it effectively hides complexity until the user is ready for it.
Think about hiding a rocket launcher from a new FPS player. Now think about hiding quarterly estimates from a tax preparer until you know that she needs to file quarterly estimates. There’s a surprising similarity. Note: hiding rocket launchers from people doing their taxes is also not a terrible idea.
E-Commerce?
Again, not really. While online stores do encourage you to explore and browse, you’ll notice that they don’t have you exploring and browsing the store itself. They have you exploring and browsing the products they want you to buy.

When you’re selling widgets, it’s all about showing off the widgets as quickly as possible. Even while you’re looking at a widget, the site or app is immediately offering you more widgets that you might be interested in.
It’s not about exploration of the product itself. It’s about getting you involved with the things the product is selling.
What Should You Do Instead?
Stop thinking about letting users explore your product. In fact, stop thinking about letting them do anything at all.

When a new user comes to your product, give them a task. Have them do the most obvious, low-friction thing that they will need to do in order to become a slightly more experienced user of the product.
Twitter is an excellent example. When you first join, they don’t just tell you to explore Twitter. They have you immediately start following people. This not only introduces you to the concept of following people, but it gives you a nice, low-friction way to start using the product in the manner it’s meant to be used.
Of course, figuring out what that most obvious first task is can be tricky. In order to do it well, you need to truly understand why your user might want to use your product. What problem are they trying to solve? What task do they want to accomplish? How do they want to change their lives? What sort of hole are they trying to drill?
Once you understand that, you’ll know how to create an onboarding experience that won’t force people to explore your product before using it. In fact, they’ll never have to explore it. They’ll just be able to accomplish their task and get on with their now-improved lives. And that, after all, is exactly why they wanted to use your product in the first place.
Like the post? Follow me on Twitter!
Want more advice like this?
How about buying the book? It will help you learn how to build great products. I promise.
Wednesday, September 4, 2013
The Best Way(s) to Learn Lean User Research
I've been excited to see more and more people getting interested in user research and customer development over the past few years. It's not a new field by any means, but it's new to a lot of entrepreneurs and founders.
Of course, what that means is that when I talk about research, I hear a lot of the same questions over and over again. I hear questions about recruiting the right users, the right number of people to talk to, and what questions to ask. I also hear a lot of confusion about how to choose the right type of research and when to use qualitative versus quantitative methods.
Now, there is a lot of great information out there about how to do research. There are blogs and books and classes. But often these are more than entrepreneurs really need. They don't want to become user researchers. They want to learn exactly the techniques that they need to do whatever they need to do right now.
The first third of my book, UX for Lean Startups, is aimed at getting people comfortable with the idea of validating hypotheses and figuring out what sort of research to do. But I've found that often people need a little more help. They need specific guides for running each different type of study.
So, that's what I'm working on now, and I hope to have some guides available in the next couple of months. These guides will be fairly detailed how-tos for things like running a usability test, recruiting users, conducting observational testing, and other topics that I get asked about constantly.
If you would like to sign up to be the first to hear when these guides are available for purchase, go here and sign up.
If you would like to tell me what guide you'd most like to see or what question you'd most like answered, send me email at laura@usersknow.com.
If you'd like something to read in the meantime, did I mention I'd written a book?
If you don't like all this reading and would prefer to learn in workshop format, I will be doing some video workshops for LUXr. You should sign up here.
And if you still have questions, you can reach me on Clarity for a quick call or sometimes hire me to consult, depending on my availability.
You know what this means, right? It means that pretty soon you will have absolutely no excuse for not learning from your users.
Tuesday, September 3, 2013
Don't Allow Behaviors. Encourage Them!
I wrote this post for the O'Reilly Programming Blog. Here's an excerpt:
As a consultant, I’ve talked to a lot of startups who have “social” products. You could tell that the products were “social” because they had comment sections and sharing icons that let people post to Pinterest or Facebook.
Of course, one of the things that the founders complain about is that too few users are actually making comments or sharing or doing anything remotely social with the product.
There’s a very simple reason for this: the founders have added features to their product that allow users to be social rather than encouraging them to be social.
Read More at O'Reilly >
Monday, August 5, 2013
Maybe You're Just Delusional
I tell people to listen to their customers a lot. It’s kind of my thing. Every so often when I’m explaining how to learn about customer problems and incorporate that feedback into a product, I run into a founder who is truly resistant.
“But...my VISION!” they cry. Then they go on to build exactly the product that they want to build without getting feedback from users. And once in a while this works out, I’m told. But typically I never hear about them, or their products, again.
The sad thing is that vision and customer feedback don’t have to be at odds.
I’m going to give you two different visions that a startup founder might have, and I’d like you to try to spot the differences between the two.
Vision #1
“Pet owners are upset about how much their pets cost. This product is going to make it more affordable to have a pet by getting jobs for the pets so that the pets are bringing in money! It’s called Jobs4Pets, and people will be able to post jobs for dogs, cats, rabbits, whatever. And other people will find jobs for their pets and apply right on the site. We’ll make money by charging a service fee on each of the transactions! Obviously, we’re mobile first, and the jobs will be shown in a Pinterest style layout because that’s the best possible layout for things.”
Vision #2
“Some pet owners are upset about how much their pets cost. This product is going to make it more affordable to have a pet.”
See the difference? I mean, besides the fact that the first one is completely delusional?
In the first one, the deranged...I mean visionary...founder has a vision not just for the goal of the company, but for every detail of the actual product. She’s not just envisioning what the product will help people do. She’s envisioning exactly how the product will help people do that, right down to the layout on the home page.
She hasn’t left room to validate the many assumptions she’s making - that pet owners have a problem with costs, that pets can do jobs, that people will post jobs for pets, that people want their pets to have jobs, etc. If any of those assumptions are invalid, by the way, the entire product will fail, and even her lovely, Pinterest-style layout can’t help her.
But the most important thing to note is that the second vision is entirely compatible with user research.
The founder with the second vision might want to go out and meet lots of pet owners in order to find out how big of a problem cost is for them. She might learn the ways that people are already saving money. She might ask which parts of pet ownership cost the most or are the most burdensome. She might test several different solutions for saving pet owners money and see which one gets the most interest or traction. She might even end up with an entirely different product than she originally imagined, all without sacrificing her vision!
So, how can you balance customer feedback with vision? Try to envision how your product is going to change somebody’s life, not how they’re going to perform specific tasks. Envision the problem that you’re solving, not the specific solution.
Then listen to your users. Observe them. Learn from them exactly how you can solve their problem.
That’s the best way to make sure that your vision becomes a reality.
This was written for Startup Edition. The question was, "How do you balance user feedback with your long term vision?"
Want more information like this?
My new book, UX for Lean Startups, will help you learn how to do better qualitative and quantitative research. It also includes tons of tips and tricks for better, faster design.
Wednesday, June 19, 2013
Mobile First? Not So Fast! The Importance of Flow and Context.
I recently wrote a post for the O'Reilly Programming Blog called "Mobile First? Not So Fast! Why 'flow' and 'context' are more important than screen size."
Here's an excerpt:
Are we done with the Mobile First meme, yet? Can we be? Please?
Look, don’t get me wrong. I fundamentally agree with a lot of the thoughts behind the annoying catchphrase “mobile first.” For example, I agree that mobile devices are now the primary (if not only) mode of connecting for many markets. I also think that having some sort of mobile strategy is absolutely required for almost every product.
The problem is that “mobile first” often equates “mobile” with “small screen” or “responsive layout” or “native vs. mobile web.” Now, those are all incredibly important decisions. But if you’re thinking about the size of your screen or the technology you’re going to use first, you are designing wrong.
Of course, if you’ve read anything else I’ve ever written, you know that the first thing you must figure out is an important customer problem or need that your product is aimed at solving for real people. We’re going to just skip over that whole part where you get to know your most important users. But that’s always first. Promise.
Once you’ve done all that though, you need to start designing. And there are two things that you should always know before you even start considering things like screen size or technology.
Those two things are: Flow and Context.
Read the rest at the O'Reilly Programming Blog >
Why "flow" and "context" are more important than screen size."
Here's an excerpt:
Are we done with the Mobile First meme, yet? Can we be? Please?
Look, don’t get me wrong. I fundamentally agree with a lot of the thoughts behind the annoying catchphrase “mobile first.” For example, I agree that mobile devices are now the primary (if not only) mode of connecting for many markets. I also think that having some sort of mobile strategy is absolutely required for almost every product.
The problem is that “mobile first” often equates “mobile” with “small screen” or “responsive layout” or “native vs. mobile web.” Now, those are all incredibly important decisions. But if you’re thinking about the size of your screen or the technology you’re going to use first, you are designing wrong.
Of course, if you’ve read anything else I’ve ever written, you know that the first thing you must figure out is an important customer problem or need that your product is aimed at solving for real people. We’re going to just skip over that whole part where you get to know your most important users. But that’s always first. Promise.
Once you’ve done all that though, you need to start designing. And there are two things that you should always know before you even start considering things like screen size or technology.
Those two things are: Flow and Context.
Read the rest at the O'Reilly Programming Blog >
Want more information like this?
My new book, UX for Lean Startups, will help you learn how to do better qualitative and quantitative research. It also includes tons of tips and tricks for better, faster design.
Wednesday, April 24, 2013
You Can't Make Good Decisions with Bad Data
I think a critical lesson of the Lean Startup movement is that you have to learn quickly.
The “quickly” part of that lesson can lead to a culture of “good enough.” Your features should be good enough to attract some early adopters. Your design should be good enough to be usable. Your code should be good enough to make your product functional.
While this might drive a lot of perfectionists nuts, I’m all for it. Good enough means that you can spend your time perfecting and polishing only the parts of your product that people care about, and that means a much better eventual experience for your users. It may also mean that you stay in business long enough to deliver that experience.
I think though that there’s one part of your product where the standard for “good enough” is a whole lot higher: Data. Data are different.
You Can’t Make Good Decisions With Bad Data
The most important reason to do good research is that it can keep you from destroying your startup. I’m not being hyperbolic here. Bad data can ruin your product.

Imagine for a moment an a/b testing system that randomly returned the wrong test winner 30% of the time. It would be tough to make decisions based on that information, wouldn’t it? How would you know if you were choosing the right experiment branch?
Qualitative research can be just as bad. I can’t tell you how many founders have spent time and money talking to potential customers and then wondered why nobody used their product. Nine times out of ten, they were talking to the wrong people, asking the wrong questions, or using terrible interview techniques.
I had one person tell me, “bad data are better than no data,” but I strongly disagree here. After all, if I know I don’t have any data, I can go do some research and learn something.
But if I have some bad data, I think I already know the answers. Confirmation bias will make it even harder for me to unlearn that bad information. I’m going to stop looking and start acting on that information, and that may influence all of my product decisions.
If I “know” that all of my users are left handed, I can spend an awful lot of time building and throwing out features for left handed people before realizing that what I got wrong was the original premise. And, of course, that problem is made even worse if I’m not getting good information about how the features are actually performing.
You Have To Keep Doing It
Unlike any given feature or piece of code, collecting data is guaranteed to be part of your process for the life of your startup.

One of the best arguments for building minimum viable products and features is that you might just throw them out once you’ve learned something from them (like that nobody wants what you built).
This isn’t true of collecting data. Obviously you may change the way you collect data or the types of data you collect, but you’re going to keep doing it, because there’s simply no other way to make informed decisions.
Because this is something that you know is absolutely vital to your company, it’s worth getting it right early.
Data Collection Is Not a Mystery
Most of your product development is going to be a mystery. That’s the nature of startups.

You’ve got a new product in a new market, possibly with new technology. You have to do a lot of digging in order to figure out what you should be building. There’s no guide book telling you exactly what features your revolutionary new product should have.
That’s not true of gathering data. There is a ton of useful, pertinent information about the right way to do both qualitative and quantitative research. There are workshops and courses you can take on how to not screw up user interviews. There are coaches you can hire to get you trained in gathering all sorts of data. There are tools you can drop in to help you do a/b testing and funnel tracking. There are blogs you can read written by people who have already made mistakes so that you don’t have to make the same ones. There is a book called Lean Analytics that pretty much lays it out for you.
You don’t have to take advantage of all of these things, but you also don’t have to start from scratch. Taking a little time to learn about the tools and methods already available to you gives you a huge head start.
Good Data Take Less Time Than Bad Data
Here’s the good news: good data actually take less time to collect than bad data. Sure, you may have to do a little bit of upfront research on the right tools and methods, but once you’ve got those down, you’re going to move a hell of a lot faster.

For example, customer development interviews go much more quickly when you’re asking the right questions of the right people. You don’t have to talk to nearly as many users when you know how to not lead them and to interpret their answers well. Observational and usability research becomes much simpler when you know what you’re looking for.
The same is true for quantitative data collection. Your a/b tests won’t seem nearly so random when you’re sure that the information in the system is correct. You won’t have to spend as much time figuring out what’s going on with your experiments if you trust your graphs.
Good Data Does Not Mean Complete Data
I do want to make one thing perfectly clear: the quest for good data should be more about avoiding bad data than it is about making sure you have every scrap of information available.

If you don’t have all the data, and you know you don’t have all the data, that’s fine. You can always go out and do more research and testing later. You just don’t want to put yourself into the situation where you have to unlearn things later.
You don’t have to have all the answers. You just have to make sure you don’t have any wrong answers. And you do that by setting the bar for “good enough” pretty damn high on your data collection skills.
Like the post? Please share it!
Want more information like this?
My new book, UX for Lean Startups, will help you learn how to do better qualitative and quantitative research. It also includes tons of tips and tricks for better, faster design.
Monday, April 22, 2013
The Best Best Practice
I get asked for a lot of what I call "generic" advice, which I'm not really very good at giving. People will ask questions like, "Should I make a prototype?" or "Should I build a landing page?" or "Should I do more customer development?"
If you've asked this in email, you've probably gotten an unreadable 5,000 word manifesto that is essentially a brain dump of everything I can think of on the topic. If you've asked me in person you've almost certainly had to listen to me blather until your eyes glazed over.
Wherever you've asked, I've probably started the response with the words, "Well, it depends..."
And it does depend. What you should do right now with your product depends on a tremendous number of factors.
However, I think I've got some better advice for you.
You see, there aren't really Best Practices in Lean UX that apply in every situation. There are merely things that would be extremely helpful, except in cases where they'd be a huge waste of time. You can learn all the techniques in the world, but you still have to know when to apply them.
Every time you are wondering, "should I do this thing?" you should immediately ask yourself the following three questions:
- What do I hope to learn by doing this?
- How likely is it that I will learn what I want to learn by doing this?
- Is there a faster, cheaper, or more effective way that I could learn what I want to learn?
An Example!
Somebody recently asked me if his company should build an interactive prototype of a proposed new feature.
I asked him what he hoped to learn by building an interactive prototype. He said he wanted to know if people would use the feature. I explained that, actually, interactive prototypes aren't terribly good for figuring out if people will use your new feature. They're only good for figuring out if people can use your new feature.
So, by building an interactive prototype, you're very unlikely to learn what you want to learn. A more effective way to learn if people will use a new feature might be a Feature Stub (also called a Fake Door).
Note: A Feature Stub is where you put some sort of access in your product to the proposed feature. For example, if you were wondering if people would watch an informational video, you might put a link on your site called Watch This Informational Video and then record how many people clicked on the link. If nobody clicked your link, you wouldn't bother to make an informational video.
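In case a concrete sketch helps, here is roughly what a Feature Stub might look like in code. This is my illustration, not something from the original post: the element ID, analytics endpoint, and event name are all made up, and you'd swap in whatever event tracking you already use.

```typescript
// Minimal fake-door sketch: the link is real, the video behind it doesn't exist yet,
// and every click gets recorded so you can gauge demand before building anything.
// The element ID and /api/events endpoint are hypothetical.
const fakeDoorLink = document.getElementById("watch-informational-video");

fakeDoorLink?.addEventListener("click", (event) => {
  event.preventDefault(); // there is no video to play yet

  // Record the click with whatever event tracking you already have.
  void fetch("/api/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "fake_door_click", feature: "informational_video" }),
  });

  // Be honest with the user about why nothing happened.
  alert("Thanks for your interest! This video is coming soon.");
});
```

If almost nobody clicks, you've learned the feature isn't worth designing, prototyping, or building.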
To be clear, it may be that he should also build an interactive prototype in order to figure out if people can use the feature as designed. However, his first step should be to learn whether the feature is worth building at all. If nobody's going to use the feature, it's best to learn that before you spend a lot of time designing and building it.
It's All About Learning
The reason these questions are so important is that Lean Startup is all about learning quickly. If a particular Best Practice helps you learn what you need to learn, then you should use it. If not, you shouldn't. At least, not just yet. In other words, it depends.
Want to learn more? Buy this book.
My new book, UX for Lean Startups, will help you learn how to build great products. It also includes all sorts of Best Practices and when you should use them.
Thursday, April 18, 2013
10 Reasons Founders Should Learn to Design
I know, I know. Founders and entrepreneurs are already being told that they need to learn how to code, hire, raise money, and get customers.
Screw that. What founders and entrepreneurs should really do is learn how to build a great, usable, useful product. And that means learning the fundamentals of research and design.
Don't believe me? Here are 10 reasons you should learn to be your own UX designer (or at least learn enough about UX design to fake it).
- You can't build a great product if you don't know what problem it solves for which people. UX design and research helps you figure that out.
- The only thing harder to find than a great designer is a unicorn.
- It is almost impossible to judge somebody else's UX design skills unless you have designed things yourself.
- The only thing more expensive than a great designer is a Fabergé egg. Sitting on top of a unicorn.
- It's much easier to manage somebody who is doing a job you truly understand.
- Jason Putorti already has a job.
- UX design is a team sport. You don't want to get picked last for the team, do you?
- You have a million fabulous feature ideas. It's easiest to communicate them to your team and customers through design.
- You should already understand your product and users better than anybody else. This just takes it to the next logical step.
- Adding extra people to the Build>Measure>Learn loop does not make it faster.
Convinced? Great! First, share this list with people!
Now, here's a book to help you learn how to do enough user research and design to get your product into the hands of people who want to buy it.
It's called UX for Lean Startups. It's by me. It will help you learn how to build great products. I promise.
Tuesday, March 19, 2013
Design Hacks - The Talk
I write a lot about user research - generally tips and tricks for people who don't have much experience with it. The reason for this should be obvious. Understanding your user, by any means necessary, is always the first step in creating a compelling product.
Seriously, you can't build a product without understanding the problem you're solving and the people for whom you're solving it. Various forms of research are the best way of understanding people who aren't you. It's really as simple as that.
But I've also seen another common problem. A whole lot of folks have learned how to go out and listen to their customers and understand problems, but they still make bad, hard-to-use products that don't really solve a problem. It turns out that, while learning your users' problems is a necessary first step, it's not the only step.
You also have to be able to create something that people understand and want to use, and you don't do that by simply trying random ideas until one of them sticks unless you have an infinite number of monkeys and typewriters. If you are constrained with respect to monkeys, typewriters, or VC funding, you might want a little guidance on what to do once you understand the problem.
Getting your design closer to right on the first, second, or third try will speed things up considerably. It's hard to learn anything from a badly designed, unusable product other than the fact that people hate badly designed, unusable products. And believe me, that lesson has been learned. Kind of a lot.
That's why I'll be giving a talk at Lean Startup Circle on Wednesday, March 20th. It starts at 6:30. There will be other interesting speakers, as well. You can sign up here: http://sanfrancisco.leanstartupcircle.com/events/102633722/
I'll be talking about Design Hacks. These will include a few general tips on producing a good design. It will also include some (free) resources for getting good design ideas. Time permitting, it will include an example or two of how to think about new features for your product in a way that makes them easier to design.
This talk is NOT for design experts. Sorry, you'll be bored out of your minds.
This talk is perfect for founders and engineers who don't have experience with turning what they know about their users into useful designs. And, as always, I'll be hanging around afterward to answer specific questions about your products.
Once again, to see the talk, you can sign up here: http://sanfrancisco.leanstartupcircle.com/events/102633722/
If you want to hear about future events where I'll be speaking, you can follow me on Twitter.
If you like to read about things like Design Hacks, the book is available for pre-order.
Monday, February 25, 2013
Don't Make Your Users Feel Like Idiots
I’m a smart person. I’ve been using the Internet since the early 1990s. I know how to program. I only feel the need to point this out, because I’m about to share with you a story in which I come across as a complete, blithering idiot, and I’m feeling a little defensive about it.
I got an email from an event that I won’t name, but I’m guessing a few of you are getting emails of your own. If you didn’t make the same mistake, then bask in the glory of being better at computers than I am. If you did make the same mistake, welcome to the club. You’re not alone.
The email I received was several paragraphs long and told me all the places and times where I could pick up my badge for the event. It also said that they were introducing something new this year called a QuickCode. The email instructed me to bring my photo id and my QuickCode to pick up my badge.
Then it had the following line:
Laura Klein’s QuickCode:
That’s it. After that, it went on to give me more badge-related information. “Aha,” I thought. “The automated system has failed to print my QuickCode.”
I immediately wrote back and said that I didn’t get my QuickCode. To the credit of the organization, I was immediately written back to by a very polite person who explained that the QuickCode was an image and even gave me instructions on how to turn on images in my email, in case I didn’t know how.
I was, as you might imagine, embarrassed. I mean, of course I know how to show images in an email. I just want to make that clear, because I’m coming off as enough of an idiot without you thinking I can’t use Gmail.
What I didn’t know was that the QuickCode was an image. Because I’ve never seen a QuickCode. Because a QuickCode wasn’t a thing to me until an hour ago. Because a QuickCode is just a name that somebody made up for a bar code that they’re using to help with their badging system.
Obviously the people writing the email knew what a QuickCode was, so it wasn’t at all surprising to them that you’d have to turn on images to see one. For those of us (ok, me) who had never heard of a QuickCode, this wasn’t immediately obvious. A QuickCode could just as easily have been a string of numbers and letters that could have been printed in the email. Of course, when I went back and re-read the email, the first paragraph did mention “scanning” the quick code, so I might have figured out what it was, but there were a lot of paragraphs in the email that I quickly skimmed. This is not unusual user behavior.
The interesting thing is that they could have avoided my acting like an idiot and subsequently having to deal with my support email by just including the phrase, “If you don’t see your QuickCode, try turning on images in your email.” They could have made that the alt text for the image so only people who didn’t have images turned on would see it.
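For what it's worth, that fix is a one-attribute change to the email markup. Here's a hypothetical sketch (the image URL and dimensions are invented); the alt attribute is what an image-blocking email client shows in place of the bar code.

```typescript
// Sketch of the badge email fragment with a helpful alt-text fallback.
// Everything here is illustrative; only the alt attribute matters.
const quickCodeFragment = `
  <p>Laura Klein's QuickCode:</p>
  <img
    src="https://example.com/quickcodes/attendee-12345.png"
    alt="If you don't see your QuickCode, try turning on images in your email."
    width="200"
    height="80"
  />
`;
```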
Why am I telling you all this? I’m telling you this because we make assumptions of this sort in our interfaces every day. We assume people know that a QuickCode is an image, even though they’ve never heard of a QuickCode. We assume people know what our products do, even though they’ve never heard of our product. We assume people know where to go within our products to find the things they’re looking for, even though they weren’t in the meeting where we determined our product structure.
We are almost always wrong.
The moral of this story is not (just) that your users are going to do stupid things sometimes. It’s not even that they’re probably only going to skim our very long emails. The moral is that we constantly need to be asking ourselves what we really expect a user to understand about our product, and we need to have ways to preemptively help them in places where we’re presenting new concepts or unfamiliar terminology.
Users don’t know our slang. They don’t know our jargon. They don’t know our product. If we want them to use our products successfully, we need to teach them what they need to know without making them feel like idiots.
Wednesday, February 20, 2013
Combining Qualitative & Quantitative Research
Designers are infallible. At least, that’s the only conclusion that I can draw, considering how many of them flat out refuse to do any sort of qualitative or quantitative testing on their product. I have spoken with designers, founders, and product owners at companies of all sizes, and it always amazes me how many of them are so convinced that their product vision is perfect that they will come up with the most inventive excuses for not doing any sort of customer research or testing.
Before I share some of these excuses with you, let’s take a look at the types of research I would expect these folks to be doing on their products and ideas.
Quantitative Research
When I say quantitative research in this context, I’m talking about a/b testing, product analytics, and metrics - things that tell you what is happening when users interact with your product. These are methods of finding out, after you’ve shipped a new product, feature, or change, exactly what your users are doing with it.

Are people using the new feature once and then abandoning it? Are they not finding the new feature at all? Are they spending more money than users who don’t see the change? Are they more likely to sign up for a subscription or buy a premium offering? These are the types of questions that quantitative research can answer.
For a simple example, if you were to design a new version of a landing page, you might run an a/b test of the new design against the old design. Half of your users would see each version, and you’d measure to see which design got you more registered users or qualified leads or sales or any other metric you cared about.
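To make that concrete, here’s a rough sketch of the mechanics, assuming you assign users to variants deterministically and log whether each one registers. The function names and event format are invented for illustration, not a particular analytics tool:

```python
# Sketch of a simple landing-page a/b test: deterministic bucketing plus a
# conversion-rate comparison. Names and event shape are hypothetical.
import hashlib

def assign_variant(user_id: str, experiment: str = "landing_page_redesign") -> str:
    """Bucket a user so they always see the same version of the page."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "new_design" if int(digest, 16) % 2 == 0 else "old_design"

def conversion_rate(events: list[dict], variant: str) -> float:
    """Share of users shown a variant who went on to register."""
    shown = [e for e in events if e["variant"] == variant]
    converted = [e for e in shown if e["registered"]]
    return len(converted) / len(shown) if shown else 0.0
```

The metric you compare is whatever you actually care about - registrations, qualified leads, sales - not clicks for their own sake.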
Qualitative Research
By qualitative testing, I mean the act of watching people use your product and talking to them about it. I don’t mean asking users what you should build. I just mean observing and listening to your users in order to better understand their behavior.

You might do qualitative testing before building a new feature or product so that you can learn more about your potential users’ behaviors. What is their current workflow? What is their level of technical expertise? What products are they already using? You might also do it once your product is in the hands of users in order to understand why they’re behaving the way they are. Do they find something confusing? Are they getting lost or stuck at a particular point? Does the product not solve a critical problem for them?
For example, you might find a few of your regular users and watch them with your product in order to understand why they’re spending less money since you shipped a new feature. You might give them a task in order to see if they could complete it or if they got stuck. You might interview them about their usage of the new feature in order to understand how they feel about it.
Excuses, Excuses
While it may seem perfectly reasonable to want to know what your users are really doing and why they are doing it, a huge number of designers seem really resistant to performing these simple types of research or even listening to the results. I don’t know why they refuse to pay any attention to their users, but I can share some of the terrible excuses they’ve given me.

A/B Testing is Only Good for Small Changes
I hear this one a lot. There seems to be a misconception that a/b testing is only useful for things like button color and that by doing a/b testing you’re only ever going to get small changes. The argument goes something like, “Well, we can only test very small things and so we will test our way to a local maximum without ever being able to really make an important change to our user experience.”

This is simply untrue.
You can a/b test anything. You can show two groups of users entirely different experiences and measure how each group behaves. You can hide whole features from users. You can change the entire checkout flow for half the people buying things from you. You can test a brand new registration or onboarding system. And, of course, you can test different button colors, if that is something that you are inclined to do.
The important thing to remember here is that a/b testing is a tool. It’s agnostic about what you’re testing. If you’re just testing small changes, you’ll only get small changes in your product. If, on the other hand, you test big things - major navigation changes, new features, new purchasing flows, completely different products - then you’ll get big changes. And, more importantly, you’ll know how they affected your users.
Quantitative Testing Leads to a Confused Mess of an Interface
This is one of those arguments that has a grain of truth in it. It goes something like, “If we always just take the thing that converts best, we will end up with a confusing mess of an interface.”

Anybody who has looked at Amazon’s product pages knows the sort of thing that a/b testing can lead to. They have a huge amount of information on each screen, and none of it seems particularly attractive. On the other hand, they rake in money.

It’s true that when you’re doing lots of a/b testing on various features, you can wind up with a weird mishmash of things in your product that don’t necessarily create a harmonious overall design. You can even wind up with features that, while they improve conversion on their own, end up hurting conversion when they’re combined.

As an example, let’s say you’re testing a product detail page. You decide to run several a/b tests simultaneously for the following new features:
- customer photos
- comments
- ratings
- extended product details
- shipping information
- sale price
- return info
Each of those features might win its own test, but ship them all onto the page at once and you can easily end up with a cluttered mess that converts worse than what you started with. Again, this is not the fault of a/b testing - or in this case, a/b/c/d/e testing. This is the fault of a bad test. You see, it’s not enough that you run an a/b test. You have to run a good a/b test. In this case, just because the addition of a particular feature to your product page improved conversions doesn’t mean that adding a dozen new features to your product page will increase your conversion.
In this instance, you might be better off running several a/b tests serially. In other words, add a feature, test it, and then add another and test. This way you’ll be sure that every additional feature is actually improving your conversion. Alternatively, you could test a few different versions of the page with different combinations of features to see which converts best.
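If you do go the combinations route, it helps to be deliberate about which combinations you test instead of throwing every permutation at your traffic. Here’s a quick sketch using a few of the features from the example above (the feature names are just labels for illustration):

```python
# Sketch: enumerate a handful of page variants - control, single features,
# and pairs - rather than assuming several individual winners also win together.
from itertools import combinations

features = ["customer_photos", "comments", "ratings", "extended_details"]

variants = [()]                                 # control page
variants += [(f,) for f in features]            # each feature on its own
variants += list(combinations(features, 2))     # each pair of features

for v in variants:
    print("variant:", " + ".join(v) if v else "control")
```

Fewer, more intentional variants also means more traffic per variant, which means you’ll actually be able to tell the results apart.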
A/B Testing Takes Away the Need For Design
For some reason, people think that a/b testing means that you just randomly test whatever crazy shit pops into your head. They envision a world where engineers algorithmically generate feature ideas, build them all, and then just measure which one does best.

This is just absolute nonsense.
A/B testing only specifies that you need to test new designs against each other or against some sort of a control. It says absolutely zero about how you come up with those design ideas.
The best way to come up with great products is to go out and observe users and find problems that you can solve and then use good design processes to solve them. When you start doing testing, you’re not changing anything at all about that process. You’re just making sure that you get metrics on how those changes affect real user behavior.
Let’s imagine that you’re building an online site to buy pet food. You come up with a fabulous landing page idea that involves some sort of talking sock puppet. You decide to create this puppet character based on your intimate knowledge of your user base and your sincere belief that what they are missing in their lives is a talking sock puppet. It’s a reasonable assumption.
Instead of just launching your wholly re-imagined landing page, complete with talking sock puppet video, you create your landing page and show it to only half of your users, while the rest of your users are stuck with their sad, sock puppet-less version of the site. Then you look to see which group of users bought more pet food. At no point did the testing process have anything to do with the design process.
It’s really that simple. Nothing about a/b testing determines what you’re going to test. A/B testing has literally nothing to do with the initial design and research process.
Whatever you’re testing, you still need somebody who is good at creating the experiences you’re planning on testing against one another. A/B testing two crappy experiences does, in fact, lead to a final crappy experience. After all, if you’re looking at two options that both suck, a/b testing is only going to determine which one sucks less.
Design is still incredibly important. It just becomes possible to measure design’s impact with a/b testing.
There’s No Time to Usability Test
When I ask people whether they’ve done usability testing on prototypes of major changes to their products, I frequently get told that there simply wasn’t time. It often sounds something like, “Oh, we had this really tight deadline, and we couldn’t fit in a round of usability testing on a prototype because that would have added at least a week, and then we wouldn’t have been able to ship on time.”

The fact is you don't have time NOT to usability test. As your development cycle gets farther along, major changes get more and more expensive to implement. If you're in an agile development environment, you can make updates based on user feedback quickly after a release, but in a more traditional environment, it can be a long time before you can correct a big mistake, and that spells slippage, higher costs, and angry development teams. Even in agile environments, it’s still faster to fix things before you write a lot of code than after you have pissed off customers who are wondering why you ruined an important feature that they were using.
I know you have a deadline. I know it's probably slipped already. It's still a bad excuse for not getting customer feedback during the development process. You're just costing yourself time later. I’ve never known good usability testing to do anything other than save time in the long run on big projects.
Qualitative Research Doesn’t Work Because Users Don’t Know What They Want
This is possibly the most common argument against qualitative research, and it’s particularly frustrating, because part of the statement is quite true. Users aren’t particularly good at coming up with brilliant new ideas for what to build next. Fortunately, that doesn’t matter.

Let’s make this perfectly clear. Qualitative research is NOT about asking people what they want. At no point do we say, “What should we build next?” and then relinquish control over our interfaces to our users. People who do this are NOT doing qualitative research.
Qualitative research isn’t about asking people what they want and giving it to them. Qualitative research is about understanding the needs and behaviors of your users. It’s about really knowing what problem you’re solving and for whom.
Once you understand what your users are like and what they want to do with your product, it’s your job to come up with ways to make that happen. That’s the design part. That’s the part that’s your job.
It’s My Vision - Users Will Screw it Up
This can also be called the "But Steve Jobs doesn't listen to users..." excuse.

The fact is, understanding what your users like and don't like about your product doesn't mean giving up on your vision. You don't need to make every single change suggested by your users. You don't need to sacrifice a coherent design to the whims of a user test. You don’t even need to keep a design just because it converts better in an a/b test.
What you do need to do is understand exactly what is happening with your product and why. And you can only do that by gathering data. The data can help you make better decisions, but they don’t force you to do anything at all.
Design Isn’t About Metrics
This is the argument that infuriates me the most. I have literally heard people say things like, “Design can’t be measured, because design isn’t about the bottom line. It’s all about the customer experience.”

Nope.
Wouldn’t it be a better experience if everything on Amazon were free? Be honest! It totally would.
Unfortunately, it would be a somewhat traumatic experience for the Amazon stockholders. You see, we don’t always optimize for the absolute best user experience. We make tradeoffs. We aim for a fabulous user experience that also delivers fabulous profits.
While it’s true that we don’t want to just turn our user experience design over to short term revenue metrics, we can vastly improve user experience by seeing which improvements and features are most beneficial for both users and the company.
Design is not art. If you think that there’s some ideal design that is completely divorced from the effect it’s having on your company’s bottom line, then you’re an artist, not a designer. Design has a purpose and a goal, and those things can be measured.
So, What’s the Right Answer?
If you’re all out of excuses, there is something that you can do to vastly improve your product. You can use quantitative and qualitative data together.

Use quantitative metrics to understand exactly what your users are doing. What features do they use? How much do they spend? Does changing something big have a big impact on real user behavior?
Use qualitative research to understand why your users do what they do. What problems are they trying to solve? Why are they dropping out of a particular task flow when they do? Why do they leave and never come back?
Let’s look at an example of how you might do this effectively. First, imagine that you have a payment flow in your product. Now, imagine that 80% of your users are not getting through that payment flow once they’ve started. Of course, you wouldn’t know that at all if you weren’t looking at your metrics. You also wouldn’t know that the majority of people are dropping out in one particular place in the flow.
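If you’re wondering what “looking at your metrics” means in practice here, it can be as simple as counting how far users get through the flow. A rough sketch, assuming you log an event per step (the step names and event format are made up for illustration):

```python
# Sketch: find where users drop out of a payment flow, given per-step events.
from collections import Counter

steps = ["cart", "shipping", "payment_details", "confirm", "complete"]
events = [  # hypothetical rows pulled from your analytics store
    {"user": "u1", "step": "cart"}, {"user": "u1", "step": "shipping"},
    {"user": "u2", "step": "cart"},
]

furthest = {}  # furthest step index each user reached
for e in events:
    idx = steps.index(e["step"])
    furthest[e["user"]] = max(furthest.get(e["user"], 0), idx)

counts = Counter(furthest.values())
for n, name in enumerate(steps):
    reached = sum(c for i, c in counts.items() if i >= n)
    print(f"{name}: {reached} users reached this step")
```

A table like that won’t tell you why 80% of people bail at one particular step, but it will tell you exactly where to point your qualitative research.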
Next, imagine that you want to know why so many people are getting stuck at that one place. You could do a very simple observational test where you watch four or five real users going through the payment flow in order to see if they get stuck in the same place. When they do, you could discuss with them what stopped them there. Did they need more information? Was there a bug? Did they get confused?
Once you have a hypothesis about what’s not working for people, you can make a change to your payment flow that you think will fix the problem. Neither qualitative nor quantitative research tells you what this change is. They just alert you that there’s a problem and give you some ideas about why that problem is happening.
After you’ve made your change, you can run an a/b test of the old version against the new version. This will let you know whether your change was effective or if the problem still exists. This creates a fantastic feedback loop of information so that you can confirm whether your design instincts are functioning correctly and you’re actually solving user problems.
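The “did it work” half of that loop is just a comparison of conversion rates between the two versions, plus a sanity check that the difference isn’t noise. Here’s a rough sketch using a standard two-proportion z-test, with made-up numbers:

```python
# Sketch: did the new payment flow actually beat the old one, or is the
# difference just noise? Two-proportion z-test with invented numbers.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """p-value for the hypothesis that the two conversion rates are equal."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Old flow: 1,000 of 5,000 users finished. New flow: 1,200 of 5,000 finished.
print(two_proportion_p_value(conv_a=1000, n_a=5000, conv_b=1200, n_b=5000))
```

A tiny p-value says the improvement is probably real; a large one says keep digging. Either way, you’ve learned something concrete about your fix.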
As you can hopefully see from the example, nobody is saying that you have to be a slave to your data. Nobody is saying that you have to turn your product vision or development process over to an algorithm or a focus group. Nobody is saying that you can only make small changes. All I’m saying is that using quantitative and qualitative research correctly gives you insight into what your users are doing and why they are doing it. And that will be good for your designs, your product, and your business.
Monday, February 4, 2013
Make Meetings Less Awful
Meetings are the worst. I mean, my God, they suck. The vast majority of meetings are simply awful.
But they don’t have to be!
If you’ve ever been in a meeting where you felt like your soul was being sucked out of your body through your eyes, I have a few tips that will make future meetings more tolerable. If you implement them correctly, they might even make some of your meetings useful! Imagine that.
Write It Down Ahead of Time
Agendas. You should have one. Well, this seems painfully obvious, doesn’t it? But seriously. How many meetings do you attend where there isn’t a single person who knows exactly what you’ll be talking about in the meeting beforehand?
Here’s a simple solution for making meetings wildly more productive. The person who is in charge of the meeting needs to make an agenda and send it out to all the attendees before the meeting. A full day is great, especially if there are things that people might want to research in preparation for the meeting. Even a few hours is helpful. It’s best if the person in charge reaches out to attendees early to see if they have anything they’d like to see on the agenda.
The corollary to this is that the meeting attendees must actually read the agenda, understand what will be discussed, and come to the meeting prepared to discuss and make a decision on any of the agenda items they care about.
And, of course, if they don’t care about any of the agenda items, they probably shouldn’t attend the meeting.
Another, slightly more spontaneous, method is the box on the whiteboard. We used to do this in engineering meetings at IMVU. Before the weekly eng meeting started, people could add topics they wanted to discuss to a list on the whiteboard. Once the meeting started, someone drew a box around the list. Nothing could be added to the list once we started, and nothing was discussed that wasn’t in the box. As a bonus, it encouraged people to get to the meeting early if they had a topic to discuss.
Everything Has a Next Step
Meetings are not open-ended discussion forums. They’re not group therapy sessions. Meetings are for making decisions. Every single thing you discuss in a meeting should have a decision and a deliverable.
Here’s an example. Once, I was in a meeting to talk about a change somebody wanted to make to a product’s design. We sat together for half an hour discussing the types of research she could do to figure out whether the design would work or whether it was small enough just to ship. At the end of about 30 minutes, she announced, “Well, I don’t think we’re going to decide this now.” To which I responded, “Why the hell not?”
Stop having discussions just to have discussions. Refusing to make a decision in this meeting just ensures that you need to have another meeting later, and nobody wants that. Make sure that all agenda items at meetings have outcomes. Sometimes the outcome will be, “Susan is going to go off and investigate these three questions and report back so that we can make a more informed decision.” Sometimes the outcome will be, “Laura is in charge of building a prototype and will pull in whomever she needs to help.” Sometimes the outcome will be, “We’re shipping this damned thing as soon as we leave the room.” I kind of wish that were always the outcome.
The outcome will never be, “Well, we need to think more about this.” The problem with this statement is that it’s too vague. There is nothing actionable about this. Nobody is assigned to do anything, so nothing will really get done, and the next time the point comes up, you’ll have to have the whole conversation over again. Everything from a meeting needs a specific next step and somebody who is assigned to take it.
Fewer Attendees
Meetings become far less productive after about four people, so whenever possible, keep meetings as small as you can. Obviously you sometimes need to have more folks, but really ask yourself whether everybody needs to be in the meeting, or if somebody would do just as well with a quick report after the fact.
If there are people who routinely aren’t contributing to the meeting in any way - no agenda items, no adding to the discussion, no making decisions, no deliverables after the fact - then they are great candidates for not getting an invitation next time. Presumably you’re paying these people, and I have to imagine there is something more productive they could be doing than sitting in a meeting checking their email.
Every Meeting Has a Leader
Someone has to be in charge of the meeting. Always.
The person in charge of the meeting has a lot of responsibilities. The leader must make the agenda, keep everybody on track, mediate disputes, ensure that everybody who has a contribution gets to make that contribution, make sure that all the deliverables and next steps are being captured, and follow up on the things that come out of the meetings.
I was in a meeting once that was led by a particularly ineffective PM. We were discussing what the priorities would be for her product (don’t even get me started on why engineers and designers were discussing this when it was so clearly her job). We were each giving our opinions about what should be done first, and the discussion began to get heated.
Instead of stepping in and guiding the discussion or just deciding what order we’d build things in, the PM sat back and let everybody scream at each other. The meeting ended with someone in tears (unsurprisingly, this person wasn’t me) and no decision made about prioritization.
Unless somebody is in charge, meetings just meander and go on for three times as long as they need to with nobody who is willing or able to say, “Right. We’re done here. Let’s go do something productive.” Having someone whose job it is to end discussion and assign tasks makes things go much more smoothly and quickly.
Besides, if we actually expected some work from the people who call all those meetings, maybe they’d call fewer damned meetings.
No Broadcast Meetings
I’m going to assume that everybody working for your company is literate. If this is true, please stop having meetings where you read things to them. You’re not in kindergarten. This is not story time.
I have been to too many meetings where a PM or CEO or somebody else who should know better shows a slide deck and then proceeds to read all the slides to the audience for an hour.
Here’s an idea: send the deck out the day before. Tell people to read it for themselves and come up with questions. At the meeting, spend no more than five minutes summarizing the most important things about the slide deck (“We made more money this month than last month! Yay!”), then take questions from the audience about the rest of the deck.
If you are concerned that people will miss critical information because they are failing to read important emails, that’s really something that you need to address separately. I’ve found that reducing meeting times by a few hours a week gives people far more time to read their email or to do something actually productive.
More Discussions, More Working Sessions, Fewer Meetings
You know what I like more than meetings (besides everything)? I like discussions. Discussions are things that happen between two or three people who are all interested in and informed about a particular topic. They tend to happen in hallways and they often help disseminate important information to the people who need it.
I also like working sessions, in which a few people all work together on something like a design or code. Working sessions generally involve a lot of writing on whiteboards or pair programming or gathering around somebody’s screen to try different variations of a particular wireframe. Working sessions are better than even good meetings because by the end of the working session, you’re often done with whatever it was you were going to just talk about in the meeting.
And maybe that’s the most important point here. Meetings are not conducive to DOING. They are conducive to TALKING. Talking is the enemy of doing. By making a few small changes in the way you conduct your meetings, you can turn them into places where things get done rather than just talked about. And that will make meetings suck a whole lot less. I promise.
Tuesday, January 22, 2013
To Kill or Not to Kill
After my rant last week about product managers, the excellent Joshua Porter (@bokardo) made a great point about it. He said, “In my own experience the hard part is knowing when to kill something vs. when to give it more breathing room, as sometimes a really new idea can’t really be tested in low fidelity.”
As much as I’d love to send my pageviews soaring by starting a flame war with somebody popular on the internet, I have to admit that he’s 100% right. Killing a feature or product is exceptionally difficult. It’s tough to know when to do it. It’s tough to figure out if you made the right decision. And it’s tough emotionally to let go of something you really thought was going to be great.
First, let’s talk a bit about why you kill products or features. You kill them because they’re not succeeding or because you don't expect them to succeed. That could mean that they’re not getting enough traction or that you’ve determined they’re never going to turn into an important part of your business. You kill them because the ROI isn’t high enough to justify investing more resources in them. You kill them because they are using resources that would be better spent in other places.
They’re hard to kill precisely because you never know whether they’re just a few days away from taking off and turning into everything you thought they’d be. After all, every successful product went through some period of time before everybody found out about it.
So, let’s look at a few questions to ask yourself before killing a product or feature. For the purposes of this post, I'm just going to talk about killing existing features or products. I'll probably address how to decide to kill things before you build them in a future post.
These questions won’t make killing easy, but hopefully, they’ll make it possible.
Why Isn’t Everyone Using It?
There are four reasons people don’t use your product or feature. Yep. That’s right. There may be thousands of reasons that people do use a product, but there are really only four basic reasons that they don’t.
People don’t use your product because:
- They don’t know about it
- It doesn’t solve a problem for them
- They don’t understand that it will solve a problem for them
- The problem it solves isn’t worth the investment of time, money, or effort
Before you kill your existing product or feature, figure out why it’s not popular. For example, if you’re simply not getting any traffic to your page, it means not very many people know it exists. On the other hand, if you’re getting tons of traffic, but none of it is converting or engaging, then your problem is one of the last three. People are finding your product, but they don't want it or understand it enough to convert.
You can find out if the product solves a serious problem for people by talking to the types of people you expect to have that problem. Develop a persona that represents the sort of person who might suffer from this problem, and interview them about that portion of their lives.
Don’t just ask, “Do you suffer from X problem?” Have them tell you stories about their real life experiences in situations where they might have experienced the problem. For example, if you’re testing to see if people need turn-by-turn navigation on their phones, you might ask them to tell you about the last time they were trying to get somewhere and got lost. Then you might ask how often that happens. If it’s extremely rare, they probably don’t have the problem you’re trying to solve.
If you’re trying to figure out if they understand the problem your product or feature solves, you can do that by showing them your product or feature (or a mockup or prototype) and asking them to tell you about it. Don’t prompt or prep them. Just show them the product and say, “Tell me what this does. Who do you think it’s for?” You will be shocked by how often your perfectly crafted prose and imagery cause nothing but blank stares.
When determining whether or not your product is worth getting, don’t forget that money isn’t everything to your potential users. Sometimes there are switching costs that they’ll have to deal with or just the cognitive load of learning how to use a new product. I can’t tell you how often I’ve seen people stick with a completely suboptimal solution to a problem, just because that’s what they’re used to.
Regardless of which it is, determining the reason people aren't responding positively to your product will go a long way toward telling you whether to kill it or keep it.
Who Is Using It?
So, once you’ve determined that there are people who are using your product or who you expect will use it because it solves a serious problem for them at a price they’re willing to pay, it’s time to look at who those people are and how many of them exist.
A great company with a very engaged group of users recently killed a feature. Unsurprisingly, there was a huge outcry. They got many, many complaints telling them how sad users were that the feature was going away. If they had gone entirely by the comments on the blog post about removing the feature, they would have been justified in thinking that they were making a huge mistake.
Luckily, they didn’t do that.
It turned out that the number of people using the feature was an incredibly small percentage of their user base. More importantly, the people using the feature were not, by and large, paying customers. In other words, a couple percent of very vocal users who didn’t earn a cent for the company were upset by the removal.
While it’s always best to avoid making your users angry, there are certainly users that it’s safer to anger than others. Keeping a feature or product that is disproportionately useful to people who aren’t benefitting your business in some real way means that you have fewer resources to devote to things that might make you some money.
The other thing to consider here is how many people you might reasonably expect to have use this product if everybody knew about it. Unless there is a huge potential market for your feature or the small market that exists is willing to pay quite a lot to use it, you may want to consider killing it.
Note: for those few people who inevitably write to me and complain that “it’s not all about money,” I would like to point out that it very frequently does have to be about money or you will go out of business. If you want to keep your 10 free users super happy, you go right ahead. I’m going to cater to the large number of folks who pay me.
And yes, I do understand the difference between long term and short term gains, and I expect my readers do, as well. Assume I'm optimizing for lifetime value here and not simply what makes the most money right this second.
What Is the Actual Cost of Keeping It?
Now that we’ve determined that people are using your product or feature, we should figure out how much it costs to keep your product or feature alive.
Of course, if you’re talking about your whole product, this math is relatively easy. The only hidden cost to keeping your product alive is the opportunity cost of building something else. If you’re working on your current product, you can’t be working on something more promising.
However, if you’re talking about a piece of your overall product, sometimes it can be harder to figure out how much it costs to keep a feature alive. Obviously there is the cost of the engineers or customer support people or sales people, but often they’re working on other things as well, so it’s not clear that cutting any particular feature will really save you any money. If even a few people are using a feature and it’s already built, why not just let it hang around indefinitely?
Well, consider some of these hidden costs to keeping a feature:
- Bug fixes
- Customer support
- A more complicated code base
- A more complicated user interface (more features means more cognitive load on your new users)
- Server and infrastructure costs
- Additional work if you decide to do a site redesign or visual refresh
These may seem like small things, and in some cases they are, but don’t ever think that a feature is free just because you no longer are actively building it.
Of course, there's another alternative, which is to continue to iterate on the feature or product. This obviously adds hugely to the expected costs. Let's say that you create a new search feature for your product, and very few people end up using it. The actual cost of that feature needs to include all the iterations and changes that you're willing to try before people start using it or you give up on it.
What Is the Actual Cost of Killing It?
In a similar vein, sometimes things can cost more to kill than you think. Unhappy users can cause trouble in forums or for support staff.
Of course, I did mention above that it’s sometimes acceptable (and inevitable) to annoy some of your users, but don’t underestimate the work that it will take to keep that unhappiness from spreading throughout your entire community.
There are engineering costs to turning off a feature, as well. Either you need to pull it out of your code base or leave it there to rot. Neither of those options is free, although both can be ultimately cheaper than maintaining the feature.
If you’re killing your whole product, you are often throwing away a huge percentage of your customer base. Just because you’re pivoting doesn’t mean that all of your users will pivot with you.
And the same goes for your employees, if you’re lucky enough to have any. Killing a product or significant feature can be absolutely terrible for morale. Obviously you’re not going to keep a failing product just to make your employees happy, but make sure that you are prepared for the fallout - and possibly resignations - when you do decide to make a major change.
So, Should You Kill It?
In the end, it really comes down to the expected ROI, and the future is notoriously difficult to predict. Good customer development techniques can help you get a clearer idea of the eventual potential of a product or feature that seems to be failing. An honest assessment of real costs can help you determine the investment that you're really making into the product.
But it's an art, not a science. In the end, you're still going to have to make the decision. And it may be the toughest decision you ever make as an entrepreneur. Good luck!
Like the post? Here's what you should do next: