You Are Not Your User - Even When You Make Employees Dash

I wrote a thing recently on LinkedIn. I’m reposting it here, as well. For reference, here’s one of the many articles about the policy that I’m writing about. As with all tech reporting, I have no idea if the policy is accurately reflected in the article, but I’ve seen lots of similar programs, and I’m writing about how they can be implemented badly.

Recently DoorDash announced that they were going to make all their corporate employees do at least one delivery per month. Companies do this sort of "eating their own dogfood" or (my preference) "drinking their own champagne" as an empathy building exercise, and I'm all for empathy! You'd think I'd be all for this, since I'm a huge advocate for really understanding your users, and the gig workers who rely on DoorDash for a living are absolutely some of the most important users that DoorDash has.

Unfortunately, I think this sort of blanket program is empathy theater, and it's almost certainly going to be less effective and more expensive than several other research methodologies. This is like driving your neighbor to the airport and thinking that you suddenly have insight into what it's like to be an Uber driver.

Why is this ineffective and occasionally harmful?

First of all, simply doing one delivery a month doesn't give you a real sense for what being a gig worker is like. What it does is give you the feeling that you know what being a gig worker is like. That's very different, and it can be dangerous because it can lead you to prioritize changes that make your experience better rather than changes that would make actual users' experiences better.

For example, consider an engineer making mid six figures in salary, living in the Bay Area, making a single delivery. Maybe they take an afternoon off from their job (with pay!) to do it. They likely have a reliable car or at least access to one, and if they don't, they can almost certainly afford an Uber. They're not depending on the tips they make to pay their rent. They're not juggling multiple delivery and driving apps. They're not worried about where they're going to pee during a 12-hour shift.

So, what will the team learn from this experience? Sure, they'll get a bit of experience of what the app is like to use (again, under practically ideal circumstances). They might find a bug or two or a bad user experience for new or infrequent users.

That last bit is important, because this sort of naive testing can lead the team to prioritize problems mostly encountered by new or infrequent users rather than ever diving deep into the power users who, especially in this case, are the ones who are contributing the most to your product.

Why is this such an inefficient way of learning?

We talk a lot about how it's important that everybody on the team understands the users, but how much does it really help anybody if Pat from Finance delivers a burger once a month? Is the systems reliability engineer going to do their job differently if they discover a bug in the onboarding flow? Should they? If it's a bad enough bug, shouldn't the team already know about it from their user research? Do we need to pay a back end engineer to report it?

And that's the thing...there is really nothing that anybody is going to learn from this that they couldn't find out from their (probably ignored) user researchers! It's possible that doing a delivery might get some folks to understand a bit more viscerally a few of the problems that they're creating, but again, this is only a tiny piece of the whole experience.

So what could they do instead?

  1. They could shadow customer service calls/emails/chats for a day every month. This would give them a much better sense of the different types of problems people are experiencing and the frequency with which they occur.

  2. They could watch well moderated interviews with different types of delivery workers explaining some of the problems they encounter.

  3. They could arrange for occasional ride alongs with actual drivers who could give them more insight into what their days are really like. (The drivers would be compensated for their time, of course!)

  4. They could engage in participatory design and research sessions with various types of delivery folks to get early feedback on features and more insight into actual usage.

In other words, they could find more ways to listen to their actual users rather than doing some kind of weird gig worker LARP and then thinking that they know everything about the job.

How to Build a Task Flow Part 2: Combining Modules

In my last post, I walked you at length through the creation of a pretty simple task flow. It was exhausting! This time, I’m going to walk you through an even simpler one, but this one has a few interesting twists. 

In this flow, we’re going to design the Create Playlist module I talked about previously. If you recall, the final flow looked like this: 

step 9 alt.png

You see that box with lines down the side? That’s a module, and as I mentioned in the previous post, it indicates that a whole bunch of things are going to happen there. It’s shorthand for “Jump to a flow called Create List here”.

Well, this is where we define what that module looks like. Again, in coding terms, it’s a lot like calling a function inside of another function. 
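If you like the function analogy, it maps pretty literally. Here's a tiny Python sketch of the idea (all the names here are mine, made up purely for illustration):

```python
def create_list():
    # The "Create List" module: a whole flow of its own,
    # defined separately from any flow that calls it.
    return {"name": "My Playlist", "songs": []}

def add_song_to_list(song):
    # The outer flow hits the box-with-bars and "calls into"
    # the module, exactly like one function calling another.
    playlist = create_list()
    playlist["songs"].append(song)
    return playlist
```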

Yes, we could have put all the boxes that make up Create List into the diagram above instead of splitting them out, but we didn’t for Reasons. Read the other blog post to understand those reasons. I don’t want to copy/paste them here (HINT: NOT COPY/PASTING IS ONE OF THE REASONS, IT’S ALL VERY META). 

The Task

First, we define the task for our module. We’re going to let the user Create a Playlist, and we’re making some assumptions about the fictional app we’re designing:

  • This app lets users make multiple playlists.

  • This app has a lot of songs to add to a playlist - like a lot a lot. We can assume it’s a music subscription service.

  • All playlists should have different names, since it sucks to look through a big list of things all named “playlist.” 

  • All playlists need a name, and we’re going to limit the length of the name and the characters we allow in it so that we don’t destroy the database or the UI.

  • There is no upper limit to the number of playlists a user can make. Is this a good decision? Maybe? It’s unlikely that a person will go through and make millions of playlists by hand. What would be the point? If it was something where that was a concern or if it was something that people could somehow automate, I’d set some large upper limit, but for now, fine - make a million playlists if it makes you happy. I have absolutely no idea why it would. 

Let’s flow!

Thinking Through the Flow

First we’ve decided that playlists need names, and we’ve decided that we’re going to make the user assign those names. This, of course, is a design decision totally independent of the flow itself. You make this call. Could you decide to name the playlists automatically? Sure! In that case, we might not even need a flow, unless we wanted to catch some possible system errors, but honestly, I’d probably just write those up as a separate error message doc. 

But no. We’re letting people name their precious playlists. So, here’s what that could look like. The process box here is asking for the user to input a title. 

step1.png

Are we done yet? Well, no. 

We have that title now. But remember in our task assumptions we had a few things we needed to check about the title. Like whether it’s a duplicate. 

step2.png

Again, to review the last blog post, the diamonds are decision points. In this case, the system is taking the input from the Input Title box and checking to see if it’s a duplicate of a title that already exists. If not, we’re good. Playlist created. Huzzah. If it is a duplicate, we send the user back to input a title. 

If it looks like we’ve created a potential infinite loop, we have! If a user just keeps entering the same name or picking the names of other playlists, they’re gonna be here awhile. It’s really important to give them a bit of a note about why they’re being sent back to the Input Title screen, because otherwise, they’re just going to keep ending back up where they started, and they’ll think the app is broken.

Also, I want to be very clear about something here. I’m using phrases like “sent back to the Input Title screen” but that’s not exactly right. Say it with me, “TASK FLOWS ARE UI INDEPENDENT!” That means that these aren’t all separate screens. If it’s a voice UI, they’re not screens at all. These are processes. In this particular case, when a user adds a duplicate title, they’re not sent to some other screen with an error message that they then get sent back from. I mean, you could certainly design the interface later to have everything on separate screens, but you really shouldn’t, because it’s a bad experience, typically. Ideally, users should be given some kind of inline error message so they can just change the title of the playlist right there. 

But whatever else happens, you need to give the user clear, specific feedback about WHY the action failed and how they can correct it! That has nothing to do with task flows, by the way. That’s just being a decent designer and a reasonable human. I don’t put the actual text of error messages on flows, because those tend to get wordsmithed later, and this really isn’t the right sort of document for that sort of text, mostly because it’s a bit harder to update these than, say, a spreadsheet.

Of course, one other fun design decision would be to make a suggestion to the user of a similar but acceptable playlist name if they picked a duplicate. To do that, we really don’t need a whole other process box, since we’re just going to keep them on the Input Title screen until they finally get it right, but we do need to mention to the engineers that this is something that needs to be done. You can annotate your flow like this:

step3.png

If you have picky engineers (or engineers with inappropriate senses of humor), you might want to make VERY clear how to suggest a playlist name. Adding numbers to the end of whatever the user originally input is a pretty standard solution and scales pretty well. 
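In code, that standard append-a-number approach might look something like this sketch (the function and argument names are mine, not anything from a real app):

```python
def suggest_name(requested, existing):
    """Suggest a unique playlist name by appending an incrementing
    number to whatever the user originally typed.

    `requested` is the user's input; `existing` is the set of
    current playlist names.
    """
    if requested not in existing:
        return requested
    n = 2
    while f"{requested} {n}" in existing:
        n += 1
    return f"{requested} {n}"
```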

Speaking of appropriate names, let’s make sure they didn’t decide to name their playlist the entire text of Ulysses, shall we? 

step4.png

Again, notice the little note on what the “right length” is. Do you have to note this here? No. Should you probably decide on the right length in collaboration with an engineer who will explain to you in great detail which lengths would make sense to limit for the database schema? Probably. Should you base this decision largely on where exactly in the UI this playlist name will be used to make sure that it’s understandable and readable everywhere it’s used? Oh yes. Very much yes. 

Remember, while task flows themselves are UI independent, many of your actual design decisions will not be! If you know that the playlist name is going to be shown on scrolling digital displays on cars and that those tend to be a certain length, take that into consideration! If you want people tweeting their playlist names for some reason I can’t actually come up with, again, there’s a natural limit! If playlists will be read out by a voice UI, nobody wants to hear a computer read out the entire text of Ulysses! Trust me! I picked 240 characters because it’s a round number and seemed slightly too big for anybody to want as a playlist name.

Ok, great. That seems good. We’re done right? Well...we could make it a bit simpler, which is often a nice thing. Maybe something like this: 

step5.png

The way we had it set up before, first we were checking to see if it was a duplicate and then we were checking to see if the title was the right length. That’s...a bad order to do things in. I mean, unless we’re constantly changing how long titles can be (and frankly, I’ve seen a few systems where this was true…) it shouldn’t be possible for something of the wrong length to be a duplicate of an existing list because how would the existing list have been created with the wrong length? 

Ok, I know exactly how this happens, but it’s bad and should be handled somewhere that isn’t your UX task flow. 

Instead, we can do a very quick little hack where we just annotate the Input Title process with the requirements for the title. In this case, it needs to be between 1 and 240 characters, and it can only contain letters and digits, because I am old and hate emoji and joy. Obviously, if you ever want this to be internationalized, you’d probably want to include at least some unicode support, but that’s absolutely not what this blog post is addressing.
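If you wanted to hand an engineer something unambiguous, those annotated requirements boil down to a one-line check. A sketch, assuming ASCII-only letters and digits per my joyless old-person rules above:

```python
import re

# 1-240 characters, letters and digits only, per the annotation on
# the Input Title process. (A real app would probably allow spaces
# and at least some Unicode; that's a later design decision.)
TITLE_RE = re.compile(r"^[A-Za-z0-9]{1,240}$")

def title_is_valid(title):
    return bool(TITLE_RE.match(title))
```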

We’re keeping the Is Duplicate diamond split out because that check causes something different to happen on the second time through the flow. We don’t want to prefill a suggested title until the user has added a duplicate playlist title, and that’s much easier to show with a diamond decision point rather than just notes on a process.

When we do design the UI, we’ll have a few different options for restricting length and types of characters. We could, for example, simply not allow the user to move on before they typed in at least one character (disable the next button, etc.), and we could just stop accepting input after 240 characters. We could also simply not register any input that isn’t a letter or a digit or we could show an error message or whatever your global standard is for dealing with inline errors of this type. If you’re working in a design system, you almost certainly have a way of dealing with badly formatted data. If you don’t, well, there’s no time like the present!

All of these things mean we can keep those checks on the Input Title process. We don’t need to let the user move to the next step (a decision point) in order to evaluate the input. We can evaluate the input for length and inappropriate characters as they enter them, which keeps them on that same Input Title process.

However, since we can’t tell if the title is a duplicate until we’re certain the user has stopped entering it, we need to send the title to the system somehow, which means it needs its own decision point. Basically, in order to move from a process box to whatever the next step is (in this case, the decision point), we assume that the user has finished doing something or indicated that they wish to move on in some way. In a graphical user interface, this is probably going to mean that they finish typing the title and click Next or Done or Create or Make It So, and then we move on to the decision point from the Input Title box.

Again, are there different ways we could do things? Sure! We could make this much more complicated by doing lots more types of checks or adding the ability to make lists public or make them collaborative. Or, as mentioned before, we could make the flow much simpler by having the system assign names to playlists. Everything is a design tradeoff, and our goal is to make things as easy and intuitive as possible for the user without going to absurd lengths on the back end or doing something that’s going to break the UI (ie. we totally get to tell them they can’t name their playlist with the entire text of Ulysses!). 
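To trace all of that in one place, here’s the module’s logic as a Python sketch. The `get_input` callable is a hypothetical stand-in for the Input Title process (which, as discussed, handles length and character checks inline before the user can move on):

```python
def create_playlist_flow(get_input, existing_names):
    """One pass through the Create Playlist module.

    `get_input(suggestion)` stands in for the Input Title process: it
    returns whatever (already length/character-validated) title the
    user settles on, optionally prefilled with a suggestion.
    `existing_names` is the set of current playlist names. Both are
    hypothetical stand-ins for real UI and backend pieces.
    """
    suggestion = None
    while True:
        title = get_input(suggestion)      # Input Title process
        if title not in existing_names:    # Is Duplicate? decision point
            existing_names.add(title)      # Playlist created. Huzzah.
            return title
        # Second time through: prefill a suggested unique title
        # by appending a number, as annotated on the flow.
        n = 2
        while f"{title} {n}" in existing_names:
            n += 1
        suggestion = f"{title} {n}"
```

Note that the duplicate check is the only piece that needs its own decision point; everything else lives on the Input Title process itself.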

How Does It Fit In? 

As I mentioned previously, you should be able to copy and paste all of these symbols directly into the Add Song to List flow we created last time and have it all work magically. You should also be able to copy and paste these symbols wherever you’d let the user create a Playlist. You’re not going to do that, because it ends up making your overall set of task flows much harder to read and maintain, but when you’ve got a flow with lots of connected modules and you want to know whether they’ll all work together, you can make a giant Frankenflow (technically it would be Frankenflow’s monster) by pasting all the different boxes together and then just seeing if it works.

Now, you may be wondering, why don’t we let users add songs directly to the playlist they just created in the Create List module? That would make sense, right? Isn’t adding songs to your playlist part of creating your playlist? You’re imagining yourself in a music app of some sort. You’ve just created a Playlist. Now you want to add some songs to it, no? That seems like the logical next step, so why isn’t it just in this flow? 

Before I explain this, I want to note that there are lots of products where you might make a list and have adding items to the list as part of that flow. For example, if you were designing a product that created lists of email addresses for the purposes of sending emails to people (like MailChimp), you’d probably have a flow that started with naming the email list and went straight into adding contacts to it.

But it doesn’t make as much sense in a music subscription type of app, partly because you’re selecting from a practically infinite database of music, which means that to add a song to a playlist, you first have to go find the song you want. Once you’re on the song page, you might want to listen to it first to make sure it’s the version you want or even add it to multiple playlists. You might find it by browsing or searching. 

In other words, regardless of whether you just created a new playlist or not, the only way to really add a song to your playlist is to find a song (somehow, through searching or browsing or other lists or whatever) and then add it to the playlist. That’s the flow we already made in the last blog post! It doesn’t make any sense to also tack adding songs onto the end of THIS flow, because it’s been handled already. You definitely don’t need some special case for finding songs and adding them to the playlist you just created, because you already have a flow for adding songs to ANY playlist. 

You might have a note in the Add Song to Playlist module that makes the latest list created the “default” list or maybe that shows lists in the order they were created, with the newest ones first. That might be a nice feature. But for this sort of set of tasks, we keep the flows separate.

I know, that was all extremely confusing, and I honestly left out about a dozen other reasons why I split things up in this way. Deciding which tasks deserve their own modules and which should go inline in other modules is hard, I’m not going to lie to you. Often, I’ll start making a flow, have it become wildly out of control, and realize I’m dealing with a bunch of different tasks that I then split up into different modules. Other times, I’ll start designing a UI and have to come back and alter the task flow because I realize that something I thought was part of another task is actually something that could get initiated in a lot of places! 

The only way to get better at these things is to make more of them and take your time tracing each path to its conclusion, following every branch to make sure it works out. You’ll get a lot better at making task flows, and you’ll also get a lot better at understanding the complexity of the systems you’re building. 



Quick note: if you’re a student or newish designer and you want to try to make one of the flows I mentioned above in the post, I’d be happy to give you feedback on it. Just send me a pdf or png of your work at laura@usersknow.com, and let me know what sort of feedback you’d like.

How to Build a Task Flow

When you’re done here, read Part 2 in the series!

In our podcast, Kate Rutter and I talk a lot about task flows and how helpful they can be when designing a product interaction. The problem is, I’ve rarely met a designer or product manager who understands how to make them well, and I haven’t found a very good guide to building them online. So, I guess I’m writing one? You’re welcome.

Please note, this post is not about how great task flows are or why you would use them. I’ll probably write that up later, or you can listen to the podcast. I’m just going to show you an example of how to make a simple task flow. If it’s helpful, maybe I’ll add another post about how to make a more complex one or how to combine them into a task flow Voltron...I mean a useful system diagram. 

The Task

First, we need to pick a task to show. It should be something that users actually want to achieve. It also should be something pretty standalone. This will make more sense later, but you want to break things down into low level tasks which you can then combine into larger flows. 

For example, if you’re working on a product like TurboTax, “do your taxes” isn’t a single flow. It’s probably a combination of lots and lots of lower level tasks like “add personal information” and “submit state forms” and “enter your payment information,” etc. 

For this post, we’re going with something even simpler. Our task will be to Add a Song to a Playlist. 

A few important points:

  • Task flows are almost always UI independent, so it doesn’t matter if this is on a phone, web, physical device, etc. 

  • We will assume that this flow is part of some sort of music listening service that lets you make multiple playlists. 

  • We’re going to assume that the user has already found a song they like and that the Find a Song flow is a separate one that is not shown here. 

  • We’ll also assume that the user is already logged in and that they have permission to make playlists. 

  • We will also assume that there is SOME widget or affordance (I’m avoiding the term button because, again, this shouldn’t depend on a specific UI element), and that the user has interacted with that affordance to indicate that they want to Add This Song to A Playlist. 

  • Ok fine. The user pushed the Add Song to Playlist button. I HOPE YOU’RE HAPPY. 

So. What happens next? 

Thinking Through the Flow

This should be pretty simple, right? We’re just going to add a song to a playlist. We’ve already found the song. How hard could this be? 

How about something like:

step1.png

A quick note about the shapes. We start and end with little circles. If you’ve done any programming, you can think of those as the function call and the return. If you haven’t, you can think of the start as “the moment that the user indicates in some way that they want to perform this task” and end as “the task is either successful or has been stopped for reasons that should be evident in the flow.” 

If you insist on thinking in visuals, start is when the user pushes the “Add to a Playlist” button and end is when the song goes onto a playlist or fails in some obvious and expected way with an error message.

The box I’m showing there is a process. Something is happening in that process. In this case, the system is adding a song to a playlist. Neat! All done!

Except…

What if there isn’t already a playlist? What will happen then? This is a really common alternate path. In fact, unless you start the user with a default playlist, this is an alternate path that will happen to every single user the first time they try to add a song to a playlist. I hesitate to call it an “edge case” because it’s absolutely guaranteed to happen immediately, so it’s not very edgy. Anyway, we should have our task flow handle that exception.

Let’s handle it like this: 

step 2.png

Hey, we added a shape. Diamonds are decision points. Think of them as the system asking itself a question. “You’ve told me to put a song on a list. Does a list even exist?” Because they’re decision points, diamonds have to have a yes state and a no state. If the answer is yes, we go down the yes path.

In this case, the options are, “If a list exists, go ahead and add this song to the list. If a list doesn’t exist, create one!” Hurray! Done. 

Hahaha. Nope. We’ve wildly oversimplified.

Creating a list is a Whole Thing. Much like adding a song to a playlist, Creating a List is not a simple one step process. It has edge cases, error cases, and even failure cases! If you’re feeling excited about task flows by the end of this post (AND YOU WILL BE!) you can try creating the flow for Create List to see how many ways this very simple process could go wrong!

But we’re not going to show all those boxes and arrows here. If we include too many subprocesses in this flow, our task flow will become awful and unreadable. Also, we may want to initiate creating a playlist elsewhere. Maybe there’s a whole separate interface for creating playlist after playlist elsewhere in the app! Or maybe you can create a playlist from the Add an Album to Playlist flow. If we create the whole flow here, we also have to create it everywhere else it might get called and that’s a lot of copying and pasting, which is bad. Generally speaking, if you have a set of instructions that get called in multiple places, you should abstract them out into their own module so when you inevitably find that you messed something up, you can fix it in just one place. Fun fact: I redid this very simple flow 5 times before I was happy with it! I’m glad I didn’t have to do it in a bunch of different places!

So, for the Create List set of instructions, we’re going to use a sneaky little trick where we just say “Create List is its own Module and we’re going to worry about how it works later!” We do that by using a box with little bars down the side as shown below. 

step 3.png


And yes, if you’ve written code, you’re probably thinking to yourself, “hey, we get it. Create List is its own function and you’re calling it here,” and you’re exactly correct. Good for you.

Bad for us though, because look what happened! We made a dead end! That’s bad! You can tell it’s a dead end because...it literally just stops. If you read through the No track of the flow, you see that we just dump people into Create List and then abandon them there. In other words, they’re going to go create a list, and when they get to the end of that process they’re going to have to go find the song they wanted again and then start the Add to List process all over again. 

Now, admit it. You’ve used products that did this to you. You were in the middle of doing something, and then they made you do something else, and then the product completely forgot about the thing that you were doing. This is a terrible user experience, and somebody who would intentionally design this is a bad designer and possibly a bad person. But most of the time people DON’T design this. It just kind of happens, because they don’t think through how all this stuff fits together. They just said, “Oh, creating lists is separate from putting songs onto lists” and then never thought about the very common use case where you might interrupt one thing to do the other thing.

The neat thing is that by using a task flow we can actually see it happening. We can visualize the fact that we sent them into the Add to List flow but they never made it to Add to List, so we’ve failed to satisfy the user’s intent. We need to do something once that List is Created. 

How about this?

step 4.png

There. Done. If a list is created successfully in the separate Create List module, we’re going to go ahead and do the Add to List process and then we’ll be all finished! Right?

Again, no. I don’t know why you keep falling for this. We’re not even halfway through. 

Take a look at that previous pic. Follow it through from the beginning. Now do it again. What happened the SECOND time through? That’s right. The second time, the answer to “Does a List Exist?” was “Yes!” Great! So, we’re just going to add the song to the list. 

But which list? What if somebody created multiple lists in the separate Create List module we just talked about? Do we pick for them? Or what if their second time through this flow is three weeks later and they want to create a new list? Are we going to suddenly take away their ability to create lists from inside the Add a Song flow because they created exactly one? That seems mean and unnecessary and a bit confusing.

How about we just add a step that lets them choose a list? Like this: 

step 5.png

That’s better. But it’s still not GOOD. Because while we’re letting them CHOOSE a list, we’re still not letting them create a new list on the second time through. Which means that they now get to choose from exactly one list and never add another one unless they go somewhere else entirely to create the list first, which again...why would you? Are you some sort of monster? 

No! You’re not a monster! I believe in you! I also believe in your ability to add an arrow from Choose List to Create List. This is an interesting thing that we can do from processes in task flows. 

step 8.png


That little line from Choose List to Create List is extremely important because it means that when you do design an interface, you MUST include in it the option for users to do multiple things. If you want to get very explicit, you can even put a little note next to the process that says which options a user might have selected while in that state.

In other words, once the user has said “I’d like to add this song to a playlist, please,” the interface will give them both a list of one or more playlists AND the option to add a new playlist. We’ve all seen this interface. This isn’t rocket science. And yet people still manage to forget to add stuff like this. 

Are we done yet? Well, we could be, but there’s one nice optional feature we may want to consider. Take a look at the following diagram. Can you tell what it’s doing? 

step 9.png


That’s right, in the Choose List branch, we’re checking to see if the song is already on the selected list. You’ll notice we don’t do the Is Duplicate check on the Create List branch. The reason is hopefully obvious - we just created a brand new list which means there aren’t ANY songs on it so this song CAN’T be a duplicate in that branch.

In this case, if the song isn’t already on the existing playlist, we go ahead and add it because that’s what the user asked us to do lo these many years ago when we started on our journey (ok, probably milliseconds, but it feels like a lifetime). If the song is already on the list the user has chosen, we’re going to give them the option to add it again. Who knows? Maybe they REALLY like that song. Maybe it’s meaningful to them. Maybe it’s the only thing that lets them FEEL ANYTHING ANYMORE...oh god please let’s just move on and stop judging me...I mean the user. Yes. The user. 

You can actually show this particular interaction in a couple of ways. This first way shows it with the process being Notify User of Duplicate, with the implication that the user would then have some sort of choice in the interface to either go ahead and add it or not. Then the system would handle that. 

You can also show it this way:

step 9 alt.png

In this case, we’re using the fact that we can have multiple lines come out of a process to show that that particular state would have multiple options for the user. One of those options would be something along the lines of “Add Anyway” and the other would be something like “Never Mind.” You can choose to use the actual copy you’d use in the interface here or just positive/negative. Basically, do whatever’s clearest. 

There’s yet another option here (ok, there are lots, but this one is interesting). There’s an interface you could design that would do the following: check to see if the song is a duplicate, if it IS a duplicate, offer to let the user add it anyway, forget about the whole thing, OR add it to a different playlist. It’s slightly more complicated, both for the task flow and for the interface, so it may or may not be something you want to do.

This is a design decision that is honestly completely separate from the creation of the task flow, but going through each branch of a flow like this can help you uncover branches you might want to consider and estimate how much more complicated they’ll make the implementation. If you’re feeling it, try extending the task flow to include that option and see what happens.
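If it helps to trace the branches, here’s the logic of the final diagram as a Python sketch. The three callables are hypothetical stand-ins for the UI interactions and the separate Create List module:

```python
def add_song_flow(song, playlists, choose_list, create_list, confirm_duplicate):
    """One pass through the Add Song to Playlist flow.

    `playlists` maps playlist names to lists of songs.
    `choose_list(playlists)` stands in for the Choose List process
    (returning None if the user opts to create a new list instead),
    `create_list()` for the Create List module, and
    `confirm_duplicate()` for the Notify User of Duplicate step.
    All names here are made up for illustration.
    """
    if not playlists:                          # Does a List Exist? -> No
        name = create_list()                   # Create List module
        playlists[name] = []
    else:                                      # Does a List Exist? -> Yes
        name = choose_list(playlists)          # Choose List process
        if name is None:                       # user chose Create List instead
            name = create_list()
            playlists[name] = []               # brand new list: can't be a dupe
        elif song in playlists[name]:          # Is Duplicate? decision point
            if not confirm_duplicate():        # Notify User of Duplicate
                return None                    # "Never Mind"
    playlists[name].append(song)               # Add to List
    return name
```

Note how the duplicate check only appears on the Choose List branch, exactly as in the diagram, and how the "Never Mind" option is the one path that ends without adding the song.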

What’s Missing? 

I keep saying that this is a simplified version. What did I leave out? Well, for one, I just ignored error states. You can’t do that in a real product. Error states are important. I’ll probably add a post on how to annotate these things with error states later. 

As I mentioned at the beginning, I also left out things like checking to see if a user is already logged in and has permission to do these things. Generally speaking, those would be their own modules because, again, they would be called from multiple places in the interface. If you wanted to be really picky, you’d probably have checks for “is logged in” and “is user permitted to add lists” or similar between Start and the “Does a List Exist” decision point.

And frankly, you DO want to be really picky. Because when you leave stuff like that out, what you end up with is more dead ends and more unhandled errors. How often have you tried to do something, been told you were logged out, gone to log in, and then had the system forget entirely about what you were trying to do? A lot? Yeah. That’s totally fixable, but you have to specify it, and task flows are a great way to do that. 
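That logged-out dead end is fixable with a pretty standard pattern: stash the user’s intent before redirecting to login, then replay it afterward. A minimal sketch, with invented names and a plain dict standing in for a real session store:

```python
# Hypothetical sketch of "don't forget what the user was trying to do."
# A plain dict stands in for a real session store; all names are invented.

def handle_action(session, action, logged_in):
    if not logged_in:
        session["pending_action"] = action   # remember the intent first
        return "redirect_to_login"
    return f"do:{action}"

def after_login(session):
    # Replay the stashed intent instead of dumping the user on the home page.
    pending = session.pop("pending_action", None)
    return f"do:{pending}" if pending else "go_home"
```

The point of specifying this in the task flow is that someone has to decide, explicitly, that the “is logged in” check stores the pending action rather than discarding it.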

But Why????

This all brings us to an important point! I love task flows. That’s not the point. It’s just true. The point is that task flows are important for two very different things:

  • Understanding

  • Communication

Understanding

Task flows can help us understand the complexity required to get a user from intent to completion. This is still a pretty simplified version of a flow, and it’s already got a lot going on. It’s much easier to see this in task flow form than it is from a static mockup that just shows an innocent-looking little Add to Playlist button with no further info.

Task flows can also help us spot possible dead ends in the interface like I showed you earlier with the Create List module. This helps us understand which parts of the interface we were ignoring, forgetting, or taking for granted.

Because task flows are UI independent, you can use them to figure out how a thing should work before you spend a lot of time thinking about how it should look. By doing this first, you’ll know which UI elements you’ll need and what input you’ll require from users. You’ll also know which other parts of the UI may need changing in order for this to work smoothly. 

Communication

But task flows are also used for communication. Mostly they’re used for communicating to engineers, because you’re basically writing pseudocode, and a well thought out task flow shows engineers that you have thought through all the use cases, edge cases, corner cases, and error cases that they’re going to have to deal with! Most of them really appreciate this kind of thing. They’re also great docs to make with engineers if that’s a thing you’re all into.

Because these are documents for understanding and communication, I strongly recommend that you go with whatever works best for you and your team in terms of how you design your flows. That’s why I showed you two different versions of the final step. Neither is technically “right.” I mean, one of them might be technically right, but I don’t care. The one that’s right for your team is the one that people understand and that communicates the correct information effectively.

Start with the basic shapes I showed you, and make sure your logic works. But once that’s true, feel free to annotate them with error messages, notes, and whatever else will help communicate the important stuff to the folks who need to know it.

And always remember! These sorts of internal deliverables are supposed to be helpful. They aren’t a required step. Don’t make a task flow if you don’t need one. Don’t spend any time making them pretty. Do keep them around later so that you can refer back to them when you’re wondering how a feature was supposed to function, though. And do share them with other folks in the org (QA! Content writers! Training! Whoever wants them!) who might benefit from knowing how your product actually works.

Like the post? Tweet it!

Quick note: if you’re a student or newish designer and you want to try to make one of the flows I mentioned above in the post, I’d be happy to give you feedback on it. Just send me a pdf or png of your work at laura@usersknow.com, and let me know what sort of feedback you’d like.

Read Part 2 in the series!


Where Product Ideas Come From

A few years ago, I wrote a book called Build Better Products. This is a lightly edited excerpt from Chapter 5. You can buy it at Rosenfeld Media. If you buy it before June 30th 2020 and use the code BBP20, you will get 20% off!

There is a persistent myth about Silicon Valley that great products spring fully formed from the brains of geniuses like Mark Zuckerberg or Steve Jobs. There is a constant search for The Next Big Thing, and venture capitalists spend their days trying to separate out great ideas from terrible ones based on a PowerPoint presentation and whether they happen to think that the founder has what it takes.

The truth is that ideas are free. Even worse, ideas are easy and fun to create, which means that, for any given product, you have far more “great” ideas than you could ever build and test.

Where Ideas Should Come From

Good ideas start with understanding. Specifically, you’ll come up with better ideas if you understand your users, your product, and your team’s capabilities. But they also come from being able to take that understanding and turn it into features that affect user behavior in a predictable way.

Again, I’m not saying that you should go out and ask users what they want and then build it. I’ve never said that. Nobody credible has ever said that. However, when companies are struggling because they are releasing feature after feature that users don’t care about, it’s always because they don’t understand their users’ actual needs. The teams are releasing things they think users should want. Or they’re releasing things that the team thinks are cool, which is a problem when the team isn’t made up of users of the product.

Any of the generative research methods from the previous chapters will help you understand your users—things like observing current or potential users, doing customer interviews, and spending time trying to spot problem patterns. Once you’ve done that research, though, you need to turn it into ideas for solutions. And this is tricky. Let me give you a ridiculous example.

Often, when I’m speaking to large groups of people, I ask who would love to lose 10 pounds overnight. More than a few hands generally go up. Great. I’ve spotted a problem pattern. This problem pattern is supported by data, by the way, since the diet industry in the U.S. alone is tens of billions of dollars each year.

Next, I present people with my brilliant solution. They can lose 10 pounds overnight by cutting off a leg. I never get any takers.

Great problem identification. Suboptimal solution identification.

Of course, cutting off somebody’s leg will remove 10 pounds. But most people want both of their legs. It’s a “solution” to the problem that doesn’t show any understanding of the real user need. If, instead of focusing on the metric of “10 pounds,” we focused on what those 10 pounds represent, we’d be able to identify this particular solution as a nonstarter before we went out and invested in bone saws and operating rooms. I know. It’s a silly example.

But the truth is that companies often do release new products and features that nobody wants or uses. Even more frustrating, often multiple companies will create products intended to solve the same problem, but one will succeed where the others fail. Either the successful companies have a better understanding of the users for whom they are building their products, or they’re simply better at coming up with ideas that turn into appealing features.

Why do so many great product people generate such crappy ideas?


Where Ideas (Unfortunately) Come From

Sadly, many product and feature ideas come from everywhere but users. In fact, the incredibly dismissive quote, “Users don’t know what they want,” is frequently used to prove why we shouldn’t be listening to users. That’s unfortunate, because we’re sure listening to everybody else.

Management

“Why exactly are we building this?” “Oh, the CEO wants it.”

Whether you’re at a startup or a large company, you’ve probably run into this problem. Unless you’re at the top of the org chart, there is somebody above you who has ideas about how the product should be built.

At startups, it’s generally founders who have an incredibly strong vision for what you should be building, and they’re not the least bit interested in hearing any argument about it. At large companies, the ideas and product suggestions can be handed down through several layers of management, which means that you don’t even know whom to convince it’s a terrible idea.

It’s maddening, and it creates environments where UX designers, researchers, and product managers feel disempowered. This is so common that it has an acronym: the HiPPO, or highest paid person’s opinion.

The problem is that even people who have had fabulous ideas in the past or who have started very successful companies can have a failure. Remember the Amazon Fire Phone? The rumors are that the entire design and production were heavily influenced by Amazon CEO Jeff Bezos. It also appears to have ended with a $170 million write-down due to unsold inventory and a product that is no longer available for purchase. Jeff Bezos has had a lot of great ideas and built an amazingly successful company. But not every one of his ideas is going to be a winner.

Investors

Even founders don’t get to escape the constant flood of “great ideas” from people above them. Investment from venture capitalists often comes with advice, some of which might be useful, and all of which is hard to ignore.

As with management, the problem is not that investors always have bad ideas. The problem is that those investors are probably not your customers. They’re giving you feedback about a product that they wouldn’t use if they weren’t investing in it. Sometimes, it’s a product that they don’t use anyway. Those ideas are likely to lead you down a path of creating a product that is more appealing to your investors than your customers.

It’s hard to say no to ideas presented to you by a smart person who has given you a lot of money, but sometimes you have to, in order to build the thing that your real customers will love.

Coworkers

Not all ideas that come from within the company come from the top. Your coworkers, teammates, and employees will all have ideas about the product you’re building as well. Some of these ideas will be fantastic, particularly the ones from people who routinely connect with users. For example, your customer service department may have insights into problems that you won’t get from anywhere else.

On the other hand, an awful lot of ideas can be generated by people who have no real input from users and no particular insight into your product. Or worse, ideas can come from coworkers who have an agenda of their own that you may or may not understand.

Competitors

Another very typical way for companies to generate ideas is to take them from their competitors. Even startups will say things like, “Oh, we need to build feature x because that’s the only way to get to feature parity with Startup Y.” And everybody who has ever talked to a corporate marketing department has been told that a feature is needed “as a checkbox” for the market.

As with ideas that come from management, not all of these will be wrong or bad. The problem is that they’re also not particularly likely to be good.

There are two significant problems with taking feature ideas from your competitors. The first is that you have no idea how useful that feature is to your competitor’s users. In other words, that amazing thing that Salesforce just added to their CRM system that probably took them months or even years to build might have been a huge waste of their time. You don’t know why they added it or what their metrics are on it. It’s entirely possible that nobody’s using it, or that it hasn’t improved their bottom line at all. Even successful competitors can make bad product decisions, and blindly following them down that path is likely to lead to failure for you.

The second problem is that your competitor’s users aren’t, by definition, your users. What appeals to the people who use your competitors’ products won’t necessarily solve a problem for your customers. In enterprise software, it’s not uncommon for companies to add features to products for a very specific customer, and if that customer isn’t yours, copying that feature won’t help your business at all.

Design Patterns

Sometimes ideas get stolen from completely different products altogether. This is how we end up with products that bill themselves as Tinder for Dogs or Uber for Ice Cream or Google+.

In other words, sometimes we get ideas from design patterns that have worked in other places. We see how wonderful the design pattern for Pinterest is, and we think how great it would be if our B2B file-sharing app worked the same way.

Adopting useful design patterns from other products is a totally reasonable thing to do, as long as you understand why they’re useful and how to use them appropriately. Simply taking a popular design pattern and applying it randomly to some other product is not a good idea.

Wouldn’t It Be Cool…

The most common place we get ideas seems to be “out of thin air.” These ideas tend to come from brainstorming sessions where people try to come up with “cool” ideas that “people” might like.

These ideas are often generated based on what’s possible or easy to produce rather than on an actual, observable problem. These ideas turn into dashboards with fancy visualizations of data that users don’t really need. Or else they turn into wearable devices that track things that nobody wants tracked. Did you know the Apple Watch lets you send your heartbeat to other Apple Watch users? ‘Nuff said.

Data

OK, this one seems a little weird for me to complain about, since I love data, and I think that metrics help us make much better product decisions. But that’s only true if you use them correctly.

A lot of teams these days are relying entirely on quantitative data to drive decisions. What happens is that a product manager will see a problem with a metric—maybe product pages on an ecommerce site aren’t converting well or churn is high in an enterprise SaaS product. That product manager will then come up with some theories about why the metric is bad and immediately move to ideating about how to fix the problem. This is generally accompanied by the team building and testing “fixes” for the problem over and over until the problem goes away or, more frequently, the product manager gets told to move on to something else.

The problem with this approach is that the product manager in that case is trying to use metrics for something they don’t do very well: understanding why something is happening. Then the team builds an entire feature or fix around that “understanding.”

What this leads to is a lot more trial and error than is absolutely necessary. Sure, the data can tell you what users are doing with your products, but data can’t tell you why. Until you understand the “why,” you can’t make a good decision about how to fix the problem.

What’s a Better Way?

For that, you really need to buy the book! Use BBP20 for 20% off until June 30th, 2020.








Product Team Mistakes, Part 2: Selecting, Estimating, and Prioritizing Features

A little while ago, I asked a lot of designers what product managers did that annoyed them the most. For the sake of fairness, I also asked PMs the same question about designers. I thought maybe I’d get a few responses and write up a quick blog post about some of the worst offenders. I’m going to be honest here. I dramatically underestimated the number of responses I’d get. 

This is the second in a series of blog posts covering some of the biggest mistakes product teams are making when it comes to collaborating and a few suggestions of how all of us might work together a little better. Read Part 1 here.

Two of the most important jobs of a PM are to prioritize and estimate features so that the team doesn’t try to work on everything at once. In many companies, this turns into a feature roadmap which probably deserves its own blog post at some point. 

Unfortunately, quite a few designers were unimpressed with their PM’s prioritization and estimation skills. Their complaints fell roughly into four categories:

  • Lack of understanding of the details of a product

  • Being too tactical and ignoring the longer term health of the product

  • Refusal to commit

  • Prioritizing internal stakeholder opinions over real user needs

Lack of understanding of the details

The first category was a pretty common complaint, especially on teams where PMs mostly just collect feature requests from stakeholders or customers. Designers complained that PMs would make wildly unreasonable roadmaps of features that didn’t really work together and that took far longer to build than expected because they simply didn’t understand the features or product in enough detail to make a good decision. 

This obviously caused problems, because it was the team (mostly designers, researchers, and engineers) who were on the hook for delivering badly scoped features on unreasonable timelines. But it’s not only a problem for missing deadlines. It’s also bad for figuring out which features to build next.  When PMs don’t have a deep understanding of what a feature is supposed to do and a rough idea of its complexity, it’s impossible to judge whether it’s more important than all of the other options. 

Designers also pointed out that these non-detail oriented PMs would give very vague direction about features, which made them impossible to design. PMs would request features from designers like “Add SMS support” or “Improve the onboarding” without giving any of the reasoning behind the request or explaining how the feature was supposed to help the user. They would then, inevitably, be upset when the designers returned something other than what was expected. 

One designer explained it as, “Usually there’s a generic, high level view of what the client/stakeholder needs but there’s rarely any real understanding or brief, and when questions are asked, the PM doesn’t know and assumes the UX fairies will figure it out. PMs rarely get involved in understanding the requirements to any helpful extent.”

Being too tactical

When PMs did understand the product well, they still made prioritization decisions that upset a lot of designers. Designers complained that PMs made small tactical decisions that focused on little changes or short term wins, and that they failed to make big bets or changes that might help improve the product over the long term.

One designer explained that the PM “didn’t seem to understand that the cumulation of all those little changes meant having to do major refactors later, which they also wouldn’t budget time for.” 

Interestingly, when asked what Designers did to irritate PMs, one of the biggest complaints was that designers tended to (in the estimation of the PMs) wildly overdesign things. Several PMs said that, even when asked for fairly small features, designers would return enormous, sweeping changes that would take months. 

It’s hard to say whether either of these views are entirely fair. PMs are often pressured to show immediate results, which can lead to short term thinking about changes. Also, there’s always a temptation to work on the smallest stuff first, since it seems odd to delay a quick fix until a giant feature has been completed. PMs also know that they’re the ones on the hook for justifying the need for making massive changes to something, and they can be worried that they’ll get halfway into a big project and have the company priorities change. Small changes can feel a lot safer, even if it means big, critical problems don’t ever get addressed. 

Designers, on the other hand, may be enticed by the idea of bigger changes that allow them more room to really make all the improvements they think are needed. Then again, I’ve certainly been in the situation where a seemingly easy design solution turned into something much larger than predicted because adding or moving a single small item meant that a dozen other things needed to be updated as well. It’s easy for design changes to snowball. 

Wherever the fault lies - and I have some theories that fault is generally pretty well distributed in cases like this - designers and PMs seem to have an awful lot of trouble deciding how much of the product should change at once. 

Refusal to commit

This next one surprised me a bit, but several designers claimed that they had worked with PMs who absolutely refused to commit to anything or write anything down, which seems...not great? One designer said that their PM wouldn’t write anything down “for fear of getting criticized” and another explained that the PM didn’t want to be seen as “committing” to any features in writing. I haven’t personally run into this, and I imagine that there’s something really off about the general culture of a company where this happens, but enough designers mentioned something similar that I think this isn’t restricted to one or two badly run places. 

There’s not much to say here other than that PMs should really write things down, and if they refuse to do so for fear of getting criticized, that’s a very bad sign about the company, the PM, or both. 

Prioritizing internal stakeholder opinions over real user needs

This last one is a huge problem. I’ve separated it out from a different designer complaint about PMs refusing to do external research, since that one deserves its own post, but a large number of the problems designers complained about are probably rooted in PMs prioritizing internal stakeholder requests over customer needs. 

For example, a huge percentage of the time that PMs explain that we have to “build feature x” rather than “improve metric y” it’s because somebody (generally somebody higher up the food chain) is demanding a specific feature for some reason. Often there will be a huge backlog of features that “have” to be built because the PM has gone to all of the internal stakeholders and simply asked what they want. 

Of course, it rarely feels like that to the PM. PMs can get tremendous pressure, especially in larger organizations, to build specific features for all sorts of reasons. Sales believes feature x will close a deal or marketing knows they need feature y because a competitor has it or the CEO saw an article in Forbes about new feature z and they are afraid of not having the hot new thing. 

Unfortunately, this tends to result in a random collection of features, which rarely makes a coherent product. Designers are the ones who are stuck trying to glue everything together without destroying the user experience, which can feel like an impossible task. 

How do you fix it?

There’s no one way to fix all of these problems other than working in an environment where PMs have the skill and authority to build products the correct way. 

As I mentioned in the related post, I don’t blame PMs for most of these problems. They’re systemic. PMs prioritize stakeholder input over users because, in many organizations, they’ll be punished if they don’t. PMs are afraid to commit things to writing because they know that they’ll be held responsible if things go wrong. They choose short term wins over longer term bets because they need to show results right now or because they’re worried that company strategy will change (again), and they don’t want to overcommit. 

The one thing that PMs do control and could improve immediately is getting a much better grasp of the details of the features that the team is building or about to build. Obviously nobody has an encyclopedic knowledge of all the tiny interactions in every part of the product, but there’s no excuse for the amount of “high level thinking” I heard about from designers. 

You can’t make good prioritization decisions without understanding a lot of the details of your product and its features. Expecting your team to accurately design or estimate anything with no more of a brief than “Add SMS” or “Improve onboarding” is product malpractice. 

Fundamentally, most of this could be fixed by PMs not prioritizing features on a backlog, but instead, clearly communicating the metrics, user needs, and company goals to the team, and then working collaboratively to figure out the best options. Unfortunately, most teams don’t work like that yet, so I’d settle for PMs who understand their products, aren’t afraid to make big changes when necessary, and who listen more to user needs than to stakeholder demands. 

Want some good exercises to help work better as a team? Check out my book, Build Better Products. 

A Framework for Making Better Product Decisions

Recently, there has been a big shift in the focus of Product teams from outputs to outcomes. In other words, some companies are starting to care a little bit less about the fact that a feature got shipped and a little bit more about whether that feature had a positive impact on user behavior and metrics. This is a great development. Shoving out 10 new features, none of which improve anything, seems like a fairly big waste of everybody’s time and money. 

Unfortunately, a lot of teams find it hard to understand whether what they’ve built improved anything important. This happens for a lot of reasons, like not having the right metrics, not being given time to measure things, or not knowing what the goal was in the first place. Even if they do know what improved, a lot of teams can’t tell you if the improvement was worth the effort.

These are all pretty big impediments to being able to measure outcomes and make better choices. By taking a slightly more disciplined approach to planning and review, teams can not only evaluate their work better, they can also improve their future decisions by identifying places where they’ve consistently made mistakes.

Step 1: Write Your Goals Clearly

If everybody just did this first step, their products would improve significantly. This is without question the most important thing you can do to make better decisions. Write down your goals and expectations before you start building. 

If everybody wrote down their goals and expectations before building, their products would improve significantly. -Tweet This

How you want to do this is really up to you. I know of at least a half dozen different styles for stating the expected outcomes of your feature or product. However you do it, you need to capture a few key pieces of information:

  • A description of the thing you’re building

  • How you expect the change you’re making to improve things and by when

  • How you’ll measure that improvement 

  • Which things you want to monitor to make sure they aren’t badly affected

  • What sort of investment will be required to make the change

  • Why you believe what you believe

The first thing on the list should be trivial, especially since you shouldn’t be making these until you’re fairly close to ready to start working on the new feature or project. These aren’t lists you make for every single possible feature you might build. These are well informed estimates for a project that is ready to go. If the project requires significant research and/or design work, you will likely want to do a short version of this before that starts and then update key parts when you have a better idea of what you’re going to be building. 

The second and third are tricky, because this is where you start laying things out as outcomes and benefits rather than just restating the feature. For example, you can’t say something like “Adding the ability to pay by mobile phone will let users pay by mobile phone.” You have to explain why that’s a good thing both for the user and the company. Something more like “Adding the ability to pay by mobile phone will allow a significant number of people who cannot currently use our service to start using it.” 

The third one is even harder, since that’s where you have to explain what “significant” means and how you’ll measure it. Just measuring how many users pay by mobile phone doesn’t do the trick here. You’ll probably need to see how many new users pay that way and whether current users who switch end up spending more or less.

And don’t forget the fourth item! In this example, you’d also need to monitor how many new users still paid the old way and overall sales in order to make sure you’re not cannibalizing a different payment method. Of course, you also need a method that lets you isolate your changes to make sure that sales didn’t go up for some unrelated reason like a big promotion or a sale on the day you release your new mobile phone payment feature. 

Don’t forget the second to last item - what sort of investment will be required to make the change. This doesn’t have to be stated in money. In fact, it’s pretty hard to do that in most companies. But once you’re at the point where you’re ready to start building something, you should have a decent idea of how long it should take and how many people or teams will be involved. 

Make sure that you’re not just talking about the time it takes to ship something. This should be how long it will take until it’s being actively used by people and you’re starting to see value from it. Those two things can be quite different, especially in B2B environments. If sales is telling you that you’ll get a big new client if you build a new feature, make sure that part of the investment includes educating clients about the new feature and training sales how to sell it, etc. Don’t forget to include any time research and design spent working on this before you had enough info to write everything down, and be sure to keep track of further research and design work as you build. 

The last item - why you believe what you believe - should be the easiest. What’s driving the decision to build this feature? Was there research that showed there was a huge potential market that couldn’t pay with a credit card? Did a specific person in the company insist that this was high priority? Did a salesperson say you couldn’t win a big account without it? Write it down! Be honest. “The CEO insisted,” is an acceptable thing to write here, but I do encourage you to try to understand why the CEO fell in love with the feature in the first place. 

If you do this correctly, over time, you’ll start to get a great view of which sorts of evidence are the most trustworthy and which sources provide the best feature or product ideas. I sometimes have an extra piece of information that I’ll record, which is, “Who disagreed with this feature?” Not everybody is always on board with every decision. Keep track. Sometimes you start to see patterns of people who will waste everybody’s time with their “great ideas,” and other times you’ll learn who’s needlessly pessimistic about every new change. 
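One lightweight way to make sure every item on the checklist actually gets written down is to keep the goal statement as structured data and check it for completeness before kickoff. This is purely illustrative; the field names are invented for the mobile-payment example above, not any kind of standard.

```python
# Purely illustrative goal spec for the mobile-payment example.
# Field names are invented; use whatever format your team prefers.

goal_spec = {
    "description": "Let users pay by mobile phone",
    "expected_outcome": "A significant number of currently-excluded users start paying",
    "measure_by": "3 months after release",
    "success_metrics": ["new users paying by phone", "spend of users who switch"],
    "guardrail_metrics": ["overall sales", "usage of existing payment methods"],
    "investment": "2 engineers, 1 designer, ~6 weeks, plus sales training time",
    "evidence": "Research showed a large segment that can't pay by credit card",
}

# A trivial completeness check before the project kicks off.
required = {"description", "expected_outcome", "measure_by", "success_metrics",
            "guardrail_metrics", "investment", "evidence"}
missing = required - goal_spec.keys()
assert not missing, f"goal spec is missing: {missing}"
```

The format matters much less than the discipline: if a field is empty when you’re ready to build, that’s the conversation to have before writing any code.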

Step 2: Post Release Retrospective

Your post release review happens as soon as the project is over. Please note that this does not replace regular product or engineering team retrospectives. If you do those, please carry on! 

For those of you who loathe all meetings on principle, please remain calm. I’m not adding a huge number of them - just two per project, where projects are defined as a fairly large feature or as a new version of a product or something of similar scope. You don’t need to do these for every button you add or piece of text you change. 

During this meeting, you’re going to review parts of your list and ask a few important questions: 

  • Did we end up building more or less what we thought we were going to build? 

    • If not, why not? 

    • What changed? 

    • What were the reasons for the changes?

  • How close were we to the original investment estimate? 

    • How were we wrong? (hint: you almost certainly underestimated wildly!)

      • Which specific costs were higher or lower than predicted? 

      • What took us significantly more or less time than we thought? 

    • Why were we wrong? 

You’re not going to be assessing whether your new thing meets expectations yet, because there’s almost never a realistic way to know that this early. All you’re doing is looking at what you expected to build, what you ended up building, how much you thought it would cost (in time/money/opportunity/whatever), and what it ended up costing. 

These are extremely important things to evaluate. If you find, as so many teams do, that everything ended up taking twice as long as you expected, that’s going to affect your company outcome. After all, would you have gone after that big new client if you’d known how much it would cost to build the feature they needed? Maybe! But you’ll never know unless you get a fairly accurate view of how long the project took, and this is easiest to do immediately after you think you’re finished. 

Step 3: Outcome Retrospective

And now, we wait. There are very few companies that can immediately judge whether a new feature has the impact they expected. All of those companies are big consumer properties with millions of transactions per day (or per second). Even then, there are all sorts of features that might require some time to measure - internal tools, features built for a small subset of the customer base, etc. 

That’s why, in the original list, you need to specify when you expect to see the benefit. Do you think it will take a few months to land the big new customer even after the feature they wanted is released? Fine, set that date ahead of time. Be generous with yourself, even. But be honest. 

If you think you’ll see a benefit in 6 months, check back in 6 months, but don’t keep extending the deadline if that customer still isn’t landed. It’s important for you to understand how long it can take to get the benefits you’re predicting. Hold the meeting, record the truth, and then feel free to set up a future date for an optional later retrospective if you think there’s still a chance you’ll get some benefit. 

On the appointed day, hold your next retrospective for the project. In this one, you’re going to go through the whole list, including the part you went through before. The questions you are trying to answer are:

  • How much has what we built changed since we thought it was “done”? 

    • Why did we change it?

    • How much more work was it?

    • How much did it end up adding to the original estimate?

  • Were there any benefits that we can prove came from the change we made? 

    • Why do we think we can attribute those benefits to the new feature or product?

    • Are there other things we also did that might account for the improvements?

  • How realistic were we about the outcomes?

    • If we were wrong, why? 

  • Were there any negative consequences of the thing we built? 

    • What were they?

    • Why did they happen?

    • Why didn’t we anticipate and prevent them? 

If you were off on anything - investment, benefits, side effects, etc. - then you have to ask the most important question: What can we do differently next time to avoid these same mistakes? 


This is the question I don’t hear people asking often enough. They just shrug their shoulders and move to the next thing. Inevitably, they end up underestimating the costs and overestimating the benefits again and again. It’s infuriating. 

Some Important Reminders

No Blame

There is a tendency, when we start asking questions like “What went wrong?”, to turn the conversation into a blamefest. You can’t do that here, or nobody will be honest, and if nobody’s honest, nobody will learn. 


These have to be free of blame. It’s not “who made this terrible decision?” The question we’re asking is, “how can we make better decisions in the future?” If you want more info on this, check out the concept of blameless post-mortems in engineering. That’s where I stole it from, anyway. 

Not Just Products

Another important thing to note is that, while I’ve been describing this as “building a product or feature,” this technique works great for any sort of big project or objective. Maybe you’re switching over to a new HR system that you think will reduce a specific kind of routine task your team has to do. Or maybe you’re adding a CRM and a new process for your sales team. Great! Write it down and do a couple of retros. Make sure you’re making good decisions. 

Remember to Iterate

One of the nice things about this method is that you may find the second retrospective is a great time to ask yourself what you should do next on the project. Did it live up to expectations or do even better than you imagined? Great! Maybe you should double down. Did it go wildly over budget and return nothing? Now’s a good time to figure out a way to fix it or kill it. 

It can be tough to convince execs to let you iterate on features that are “done.” It can also be incredibly easy to let non-performing features linger forever as zombies in your product. This is a fantastic breakpoint that encourages everybody to assess the feature objectively and take the right next step. 

Include the Whole Team

These are not meetings that you hold in secret or with only executives. They’re not about judging other people or finding ways to punish bad performers. They need to be run by the teams who are doing the work, and ideally, they include any stakeholders or decision makers. If you can’t get everybody actively involved, make sure that they at least see the results, especially if the right next step involves changing some important process. 

Anybody who can make decisions on a project should be given the information they need to determine whether their decisions were good. It’s the only way we learn to make better decisions. 

Make the Necessary Changes

You will need to make some changes. The hardest part of this process is not adding extra meetings or writing down goals. The hardest part is learning from your mistakes and changing the environment that allowed them to happen. 

Every so often, go back over the notes from previous features. Are there patterns? Are there mistakes you’re making repeatedly? Are there “reasons” for building features or products that consistently underperform? Are you always overestimating the return on features and underestimating the cost? 

This is where you need to come up with systemic changes, and you can’t just write down, “BE SMARTER” because that never works. Trust me. You need to identify where the system went wrong and change it when possible. 

This part is hard and probably deserves its own blog post, but there’s lots of good info about this if you look at information about software post-mortems. 

Make It Yours

And, as with all advice, feel free to adapt or change this to suit your team’s needs. No advice is one size fits all, and no set of questions will be perfect for all projects. But all teams can benefit from stating their expectations clearly before starting a project and reviewing specific metrics once the project is finished. 

Interested in learning more? Check out a version of this in the Hypothesis Tracker section of my book, Build Better Products. 




Product Team Mistakes, Part 1: Communicating Company & User Needs

A little while ago, I asked a lot of designers what product managers did that annoyed them the most. For the sake of fairness, I also asked PMs about the most irritating habits of designers. I thought maybe I’d get a few responses and write up a quick blog post about some of the worst offenders. I’m going to be honest here. I dramatically underestimated the number of responses I’d get. 

This is the first in a series of blog posts covering some of the biggest mistakes product teams are making when it comes to collaborating and a few suggestions of how all of us might work together a little better. 


Understanding the Business

One of the biggest complaints was fascinating, because I heard it from both PMs and designers. Designers complained that PMs were bad at understanding and describing the business case for features they were requesting, while several PMs complained that designers didn’t understand the company’s business. 

Whatever you think about splitting up roles within a product team, most people agree that understanding and explaining the business side of a product is more the PM’s job than the designer’s. If they don’t understand the business or if they’re not sharing it in a way that helps everyone know what is being built, that’s a pretty serious problem. Still, let’s not let designers off the hook here. Everybody making changes to the product should understand the business needs, but it sounds like there are quite a few folks out there who don’t. 

Designers cited “vanity metrics” and “death through optimization” as examples of PMs who misused metrics or didn’t understand the business well. Instead of focusing on important KPIs that would reflect a better user experience or an improvement in things like revenue or retention, some PMs looked at numbers that didn’t mean much, like “time on site” or “total registered users.” Others spent a huge amount of time eking out tiny improvements with tweaks to things like button color or text, instead of making larger changes that might have a significant impact on usability or usefulness. 

Several designers also said that their PMs only seemed to have a very high level understanding of the product and didn’t have any real ideas about how to improve anything important. Designers felt that PMs should have a deep understanding of the business model and how changes to the product contributed to improving the bottom line and customer experience. Unfortunately, in many cases, PMs either didn’t have a firm grasp on the economics of the product or they couldn’t explain it in a way that designers understood. 

I’m not putting all the blame on the PM here. There were also PMs who complained that designers or researchers with whom they worked were actively hostile to learning about money. When the PMs tried to get designers to understand that their work had to improve metrics, some designers insisted that design was somehow different, and they were exempt from understanding the business model. 

Solutions vs Problems

Another frequent complaint was that PMs weren’t sharing user needs or problems with the team. In fact, in some cases, they were actively interfering with designers contacting users to learn this themselves, but I’ll explain more about that in a future blog post. 

So if PMs weren’t clearly articulating business needs or user problems, what were they doing? Demanding specific solutions, in most cases. Designers claimed that PMs would frequently come to them with a “solution” and simply ask the designer to “make it pretty” or in some cases “make it work.” 

In general, designers wanted PMs to share the goals of the user and the specific problem that needed to be solved. Instead the PMs would come up with a solution and deliver it as a specific feature request without explaining how the feature would help the users succeed. 

Even when the PMs did write problem statements, they would sometimes write them in the form of a solution. For example, “The user can’t place their order using SMS,” or “The user can’t use the advanced search tools to filter results by relevancy,” rather than explaining the context of the user and the user goals that weren’t being achieved. 

The problem with this approach is that it devalues the potential contribution of designers in a very specific way. It limits designers to mostly visual decisions, which is fine if you’re dealing with a designer who is only concerned with things like typography, color, and whitespace. But it’s a terrible waste of designers who have experience and training in things like information architecture, systems thinking, or research synthesis. 

Many user experience and product designers are used to coming up with creative solutions to user needs. They would much prefer being given a problem and asked to solve it rather than being given a feature and asked to draw it. 

Why does this happen? 

None of this is particularly unusual. In my experience working with teams, I’ve seen quite a few PMs who think their job is either to:

  • Come up with all the features and solve all the problems themselves, OR

  • Come up with a high level idea and then leave all the pesky details (like how it works and what metrics it will improve and how it might help users) to somebody else

The biggest culprits for all of these behaviors are the following (all of which probably deserve blog posts of their own, but who’s got that kind of time??): 

  • The CEO/Stakeholders really want a specific feature

  • The designers on the team are largely visual designers and aren’t used to being asked to come up with feature ideas 

  • PMs don’t have access to customer research and/or metrics (I know! It’s terrible.)

  • Sales says they need a key feature to be competitive with a big customer (VERY common in B2B)

  • Somebody falls in love with an idea or solution instead of starting from the business and user needs

  • PMs believe their job is to simply gather requirements from various stakeholders and then prioritize the list (to be fair, in some orgs, this is literally the PM’s job - although it rarely results in a good product)

How do you fix it? 

If you’re a PM reading this, you might think, “Oh, this doesn’t apply to me!” And you may very well be right! This was not a huge statistically significant survey of all types of teams. This was random whining on the Internet. But let’s just say there was no shortage of that whining, and there were some extremely strong patterns to be found, many of which I’ve seen in my own experience working with teams. 

The best way to find out if your team (not just the designers!) really understands the business is to ask them to explain it to you, preferably in a friendly, non-confrontational way. If they don’t get it, try not to automatically blame them or assume designers don’t care. Take a look at what you’re doing to make sure that everybody on the team understands the “why” behind everything you build. 

Also, make sure that you take a look at when you’re bringing designers into the planning process. Do you wait until you have a really good idea of the feature and how the feature will work before “handing things over to” design? Stop it! Bring design in at the point where you’ve figured out the business need and the general user problems and have your UX designers help figure out the right features. 

Of course, handing over nothing more than a single sentence or short paragraph can be just as bad. Saying “we need better search!” isn’t strategy! It isn’t high level thinking! It isn’t even particularly helpful! 

Instead, try framing what you need as a user and business problem: “Our data are showing that a lot of users are searching for x or y and then abandoning the product. Some initial research suggests that they may get really frustrated because they’re having a hard time understanding the results. What are some things we could do to fix this for them and help them get what they’re looking for?” 

And designers don’t get a pass on any of this either. If you refuse to understand the business needs or feel like getting insights from customers is somebody else’s problem, you’re not going to make very good decisions. 

If PMs come to you with a fully formed feature, try asking about the needs behind it. Maybe something like, “This looks really interesting. Can you tell me what sort of impact you expect this to have on the business? How about the user need? What is this solving for them?” If you don’t understand the answer (or if the PM doesn’t have one), hopefully you can work together to understand why the PM wanted the feature in the first place. 

Whether you’re a PM or a designer or anybody else working within a company to build something for humans, it’s critical that you understand why everybody is making the decisions they’re making. When we all understand what the company needs and what the users need, we can all make better decisions. If you’re interested in some exercises for working better as a team, you should check out Build Better Products. 

Read part two of the series here: Selecting, Estimating, and Prioritizing Features.



Portfolios for Product Managers

Interviewing people is hard. If you’ve been a hiring manager for more than a few years, you’ve almost certainly run into somebody who seems great in the interview but can’t do the work. And you’ve probably missed out on some amazing talent who, for whatever reason, weren’t impressive in their interviews.

It’s tough on the other side, too. When you’re looking for a job as a product manager, you need to find a way to show people not just that you’re great in conversation but that you’re great at making things. Designers and engineers have a bit of an advantage here over PMs. They have portfolios and GitHub where they can showcase actual things that they’ve made.

But as a PM, what you make most often is decisions. How do you show that you made the right ones?

I know it’s not common, but I think that interviewing would be an awful lot easier on both sides if everybody created portfolios of their work, even PMs. Now, before you run out and grab a Dribbble account, let me give you a few guidelines for what a good portfolio could look like.

Don’t worry! You don’t have to be an artist (or even a designer) to make one. I’m not suggesting you create a beautiful showpiece, just something that will help people understand what sort of projects you’ve worked on and what your contribution was to the team.

A good portfolio should consist of one or more case studies of projects that you’ve personally worked on. The point of each case study is to show your particular contribution to achieving a necessary business goal.

State the Goal

When I’m working with designers on portfolios, I always ask them to make sure to state the goal of the project up front. While it can be tempting to jump right into a long description of the thing you made, that can pretty quickly turn into a boring list of features.

There’s a good chance that those features won’t be at all interesting or relevant to potential hiring managers. What IS interesting is how you made decisions about those features, because it shows the way you approach problems. To do that, make sure you share the problem you were trying to solve or the goal your team was trying to reach.

The type of goal you want to share might be a specific metric, like “increase revenue among a specific subset of users” or “decrease the amount of time users spend on tasks that could be automated.” It could also be something more exploratory like “find an adjacent market that could be an opportunity for an existing product expansion.” Or it could be an experiment to validate an existing idea.

Whatever the goal, make sure it’s not something prescriptive like “build a comments system.” Describing your goal that way makes it clear that you’re not thinking strategically about customer needs; you’re just taking orders from somebody else and executing on an existing plan. If you did build a comments system, tell your reader why on earth you’d want to do such a thing!

Show Your Work (and give credit to others)

I did say that you don’t need to be an artist, and that’s true. You do need to show your work, though.

Products don’t jump fully formed from the heads of their creators. If you’re going directly from vague feature to fully implemented product, then you’re either skipping a bunch of steps or you’re completely removed from how your product is actually getting built.

Show the process that your team used to research, synthesize, ideate, communicate, and develop. If you did in-depth user research, describe how you did it and what you learned. If you made sketches or mockups, show them. If you ran a design sprint, explain how and why and show the results. If you built a PowerPoint deck to sell the idea to execs, include a few slides. And if you actually built a functional prototype, by all means link to it (if you can).

This is your chance to show what you’re capable of. It’s also the chance to give credit to the rest of your team. If the designer made the mockups, credit them. If an engineer helped on the prototype, give them a shout out. Explain which parts you built yourself and where you collaborated with others, since very few of us are expected to build software on our own.

Explain Your Thinking

Here’s the most important part. You need to explain why you made the decisions you did. Try, if at all possible, to pick a project where the explanation isn’t just “because the senior execs told me to,” although most PMs I know have lived some version of that project.

You already said what the goal was, and you showed what sorts of ideas you came up with and how you explored them. Now explain why you picked the direction you did.

What does the product do now that it didn’t do before, and how did you expect it to help your company reach its goal? What made you sure that it would work? How did you prioritize the changes you made over others?

Share the Result (if possible!)

I know, it’s not always possible, but if you can share the results of the change, even at a high level, it can be incredibly helpful. Even if the feature didn’t perform in the way you’d hoped, sharing that and explaining what you did to fix it later can be a great example of how you’d recover from a similar setback at your new job.

If you don’t have (or aren’t allowed to disclose) numbers, consider describing user reaction to the feature or the general reception by the intended audience. Sharing results helps show that you care about the outcome of what you’re building and that you are focused on creating value for both your users and your company.

Get Started Now

So, how can you do this? There are a few options. One is to go with something fairly simple like this write up done by an ex-student of mine in UX. Alternatively, here are a couple of examples made by students of Alex Cowan, with whom I’m teaching a class in May 2019:

“But wait,” you say, “those are all student projects, not real-life examples.” That’s right. For those of you who don’t have projects that you can share publicly, presenting a student project or some sort of side hustle can still help show what you can do.





Learn to Be Technically Literate

Before the printing press was invented in the 15th century, the vast majority of people around the world were illiterate. Which makes sense. Before the printing press, there wasn’t much to read.

Miniature from Liège-Maastricht area, ca. 1300-1325 [Public domain], via Wikimedia Commons


Even after the press, it took hundreds of years before a majority of people in Europe were literate, and more than half the world’s population still couldn’t read until more than halfway through the 20th century. There are some fascinating charts and graphs here, if you’re interested.

But this post isn’t about learning to read. It’s about technical literacy.

The first real computer programmers (apologies to Ada Lovelace) appeared in the late 1940s and early 1950s. Considering the fact that they had to write assembly code by punching holes in cards, it’s not surprising that there weren’t a whole lot of them. Now, 70 or so years later, there still aren’t a whole lot of them.

Evans Data Corporation, a company that studies these sorts of things, estimates that there are about 23 million software programming jobs in the world. Of course, this doesn’t include folks who can code but aren’t employed as computer programmers, and it’s impossible to know how many of those there are, but even if there were 10 times as many people who could code as there are people employed as computer programmers, that would still only be about 3% of the world’s population.

That’s...not great.

As software takes over the world, that’s a huge number of people who will have very little understanding of how anything they use works, and who will find it harder and harder to participate in building new products. It’s like we invented the printing press but only a tiny minority of people bothered to learn to read all the books that are being printed.

I truly don’t believe that everybody needs to learn to program or that we need to teach everybody to be a software developer. There are still plenty of jobs where you’ll never have to write or even read a line of code. But there are going to be fewer and fewer products that don’t have any sort of code associated with them. Smart home devices mean that refrigerators have their own mobile apps now and have to connect to smart hubs. Hotel keys have embedded chips. Sales teams use complicated CRM systems to manage relationships. Everything is sold online. The number of jobs that don’t require any technical ability is becoming vanishingly small.

Understanding how technical things work and interrelate is incredibly important. It can also be incredibly intimidating. There’s just so much to learn, and it can be hard to figure out where to start. Let’s say you want to build a simple mobile app. What language should you use? What development environment? How do you get started? What are packages and libraries? Do you need a server? What platforms should it run on? Why am I getting all of these cryptic errors??? I don’t blame people for taking one look and running for the hills. We don’t make it very easy to get started.

The Easy Way to Get Started

It doesn’t have to be this hard. The great thing about building stuff for the web is that you can get started with nothing more than a text editor and a browser. With the knowledge of a few pretty simple things, you can start making things that work immediately.

So, what do you really need to know? The easiest place to start is still HTML and CSS. They give you a quick and obvious way to build things that people can use. Add in some basic JavaScript, and suddenly you can build fully interactive prototypes and simple web applications and tools.

Even if you never ship a single line of code to a customer, being able to make something yourself can give you the confidence to start learning about other, more technical aspects of building products. If you work with engineers, it can give you a better understanding of some of the challenges they face. And, if you’re anything like me, it can make you feel amazing and powerful and a little bit like a wizard.
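To show just how little you need, here’s a complete page you could paste into a text file, save with an .html extension, and open in any browser. Everything in it - the button, the counter, the wording - is invented for this example:

```html
<!DOCTYPE html>
<html>
<head>
  <style>
    /* A little CSS: make the button big enough to feel satisfying */
    button { font-size: 1.5em; padding: 0.5em 1em; }
  </style>
</head>
<body>
  <button id="magic">Click me</button>
  <p id="output"></p>
  <script>
    // A little JavaScript: count clicks and show the result on the page
    let clicks = 0;
    document.getElementById("magic").addEventListener("click", () => {
      clicks += 1;
      document.getElementById("output").textContent =
        "You have clicked " + clicks + " time(s). Basically a wizard.";
    });
  </script>
</body>
</html>
```

That’s the whole thing: no build tools, no server, no packages. Save it, open it, change something, reload, repeat.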

Why Is Innovation Hard at Big Companies?

I gave a keynote at Lean Startup Week 2017 in SF, early in November about some of the most common problems that large companies have implementing Lean Startup. Here's a little blog post I wrote up for the Business Talent Group blog summarizing the talk. 

If you’re responsible for launching new products, you’ve probably come across Eric Ries’s best-selling book, The Lean Startup. The Lean Startup methodology—which was created to help founders in Silicon Valley build better products—is incredibly useful for new companies and entrepreneurs who are trying to create innovative products and find product-market fit.

Over the years, however, Lean Startup has gone from something practiced by a few early adopters in very small companies to something that’s made inroads at organizations like GE, Toyota, and the federal government. And when you’re trying to introduce a big change at a large, established organization, you run into some very specific challenges.

I’ve spent years helping organizations of all sizes build new products and, in particular, create product development processes that help them continue to deliver. Here’s what I’ve learned about being an innovator at big companies—and helping them adopt the best of Lean Startup methodology. 

Read the whole post here >

Making Dashboards Useful and Usable

Products that people love and incorporate into their everyday lives tend to have two things in common: they’re both useful and usable. In other words, they’re things that people want to use because they fill a particular need, and they’re things that people can use, because they’re not prohibitively complicated. 

Unfortunately, a lot of products fail one or both of these tests pretty badly. Either nobody wants to use them because they don’t see the value, or people try using them and give up in frustration. If you’re struggling with an analytics dashboard that isn’t getting the sort of usage you’d like, it’s worth conducting some research in order to find out if you’ve shipped something useless or unusable. Or both! 

Read more on the Logi Analytics blog >

The Right Deliverables

Once upon a time, I worked with a designer who refused to use any tool except Illustrator. Everything got made in Illustrator, whether he was building a visual design mockup, a task flow, or a discussion guide for a user research session (seriously). All of his deliverables were gorgeous.

He was also the slowest designer in history. Every single thing he did took five times as long as it would have taken anybody else, and much of it wasn’t very usable or useful. Pretty, though. 

While the visual interface, if your product has one, is an important part of the user experience, it’s not the entire user experience. And that means that the deliverables designers create to demonstrate a visual interface are not the only deliverables they need to make. 

So, if designers aren’t going to just produce pixel perfect Photoshop or Illustrator files, what sort of deliverables should they be creating? That's easy. They should make whatever they need to communicate the thing that needs to be communicated to the audience to whom they're communicating it.

Let’s get a little more specific. First, you have to understand the role that design deliverables play in the product development process. Designers, the good ones anyway, help to craft the experiences that users will have with a product, but they are rarely the ones to build the end product. Of course, there are designers who also code or build things, but even when this is true, there are typically other people on teams who also need to build things that the designer has specified. This means that the designer needs to create some set of documents or artifacts showing other people what to build.

But those aren’t the only deliverables that designers create.

In fact, designers make artifacts for three general reasons:

  • creation
  • communication
  • validation

Deliverables for Creation

The first type of artifact or deliverable is the kind that’s helpful during the creation and ideation phase of design. These are tools that designers use by themselves or with small groups of people working together in order to get ideas out of their heads and into the physical world.

These sorts of deliverables might include things like sketches, ideas written on sticky notes, affinity maps, or dozens of other works in progress. In fact, many of the exercises in my upcoming book, Build Better Products (Rosenfeld Media ‘16), produce these sorts of deliverables - often bunches of sticky notes presented in some sort of framework.

These deliverables are great for communicating within a small team and making sure that everybody is talking about the same thing.

Created by the amazing students at my Stanford Continuing Studies class!


Ideally, these are done roughly and quickly, since they're essentially temporary and meant to facilitate a meeting or work something out in a group. In fact, they're hardly deliverables at all. You might prefer to call them artifacts of the design process, if that's what you're into.

When done right, these artifacts are incredibly useful to the people creating them and completely useless for communicating anything to anybody who wasn’t there when they were made.

Even when what you’re creating is a visual design, the early stages of working out that design can be done quickly with the goal of iterating through lots of ideas to find the one you want to move forward with. Many visual designers start with mood boards or collections of colors and examples of typography and imagery in order to narrow down the exact look and feel they want to achieve. Again, these don't tend to be that helpful to anybody who isn’t the designer or at least extremely familiar with the visual design process. 

That’s fine. Artifacts that help the creation process are not meant to be used for communication to other people after the fact. There are entirely different deliverables for those.

Deliverables for Communication

Designers have to communicate their ideas to other people. Frequently, they have multiple audiences for that communication. For example, they may have to communicate designs or concepts to engineers who are going to build the product. They may have to communicate new feature ideas or directions to customers in order to get feedback. They may have to present things to managers or execs within the company.

Each of these different audiences can require different types of deliverables. In fact, any given audience might require different types of deliverables depending on the purpose of the communication.

Some executives simply can’t stand seeing any designs that aren’t complete, pixel perfect, and fully interactive. Others might be fine with an early sketch or even a task flow. Depending on what the engineers are building, they might prefer a well constructed task flow and a set of detailed user stories rather than a static Photoshop mockup. At other times, they might need a detailed visual design specification.

Task flows help you understand which screens need to exist and how a user might experience them.


Deliverables for communication are a type of product in themselves. You need to understand who the intended audience is and what you want to communicate to them.

However, these deliverables do tend to be higher fidelity than deliverables that are only intended to help the designer create things. Because they are meant to communicate complicated ideas to people who may not have had anything to do with their creation, they have to be easier to understand than the type that are used merely to help a team think about the product together.

Deliverables for Validation

Another type of deliverable is used for validation. For example, a designer might produce a mockup or interactive prototype for the purpose of usability testing or to get feedback from a customer on a particular direction the team is thinking of going.

As with the deliverables for communication, what you produce here is going to depend on your audience. Some customers may be comfortable with low fidelity sketches or static pictures of features, but in general, these are going to be the most detailed deliverables a designer creates.

There are good reasons for this. When you’re usability testing a prototype, you can get much more realistic feedback from a prototype that behaves like the final product. If you’re trying to get feedback from a customer about a possible feature, there’s an excellent chance that the customer doesn’t have nearly as much information about the product as you do, so showing a more realistic example will help them to assess whether it’s an interesting feature.

Of course, the type of deliverable that you share here will also depend on what your goal is. Are you interested in getting usability feedback? You probably want an interactive prototype. Do you want to get customers excited about an upcoming feature? A high fidelity mockup might be better.

In general though, when sharing with outside people, avoid very high level sketches and concepts or boxes and arrows, since outsiders will have a difficult time understanding what they’re looking at.

Why On Earth Does This Matter?

Ok, that was a lot of time spent talking about things that designers make, and you may be wondering why it's useful. It's not just designers who create artifacts and deliverables. Everybody on your team creates them, all the time. You probably create documents, decks, spreadsheets, emails, and a dozen other types of artifacts every day.

Many teams have trouble communicating because they create the wrong types of artifacts for the job they’re trying to do. Thinking about what you’re trying to communicate and to whom will help you determine the correct level of specificity required from a deliverable.

Being thoughtful about this can save you a huge amount of time - both because it keeps you from creating overly detailed deliverables and because it prevents a lot of confusion that can happen when you share the wrong level of deliverable with a team member or customer.

Let me give you an example. Let’s say you’ve just had a meeting with your team where you’ve been sketching possible feature ideas on a board. Maybe you’ve gotten to the point where you all agree on a direction and even sketched some wireflows. In order to make sure that everybody remembers what you decided, you take a picture of the whiteboard and attach it to the story in your issue tracker.

Wireflow. Illustration by the awesome Kate Rutter from Build Better Products. 


Now imagine that a new engineer has just joined the team, and they weren’t in the meeting where you had that discussion. How useful will that picture be? Or, imagine that you share it with a customer who requested a similar feature in order to get feedback. Do you expect that the customer will be able to give you any sort of useful information about what you’re sharing? Very unlikely.

On the extreme other end, let’s say that you decide that you need to make a change to your app to allow users to opt out of certain notifications, and you need to share the specifics with the engineers who will be building it. Do you really need a fully prototyped, pixel perfect version of the feature? Or would a quick task flow and a sketch showing the changes be more useful to the engineer who is implementing the change?

Instead of always creating the exact same deliverables or whatever’s easiest - photos of whiteboards, powerpoint decks, Photoshop mockups - think about the audience for the deliverable and what you’re trying to communicate. And make sure that everybody on your team does, as well.

A Deliverables Framework

When you’re deciding on what sort of deliverable to create, you need to consider the following:

  • who is your audience?
  • what are you trying to communicate?
  • what sort of action do you want them to be able to take?

Honestly, the best way to find the right deliverables with your team is to sit down with them and understand how they work, and then experiment with different levels of fidelity. If they want something with significantly more detail than you think is justified, talk to them to understand why.

Are they trying to avoid a problem they’ve had in the past? Are they uncomfortable making decisions? Are they new and lacking any of the background? Are you just really bad at judging how much direction someone might need? All of these are possible. In other words, treat your deliverables like a design problem and your coworkers or customers as the users of those deliverables.

Once you understand your user and what you want to communicate to them, you need to have a firm idea of what sort of action you’re expecting from them. Are they supposed to implement a feature? Give usability feedback? Help you flesh out a concept? Estimate the amount of work something would take? Greenlight a new project? You should always give people what they need in order to give you the sort of feedback that you want.

In other words, don’t expect people to give you usability feedback on a high level sketch. There simply isn’t enough information there to respond. On the other hand, a developer shouldn’t need a fully functional prototype with complete visual design to start working on building a feature.

Once you know who you’re communicating with and what you’d like to get back, you need to pick the right deliverable. Deliverables can come in a dizzying array of styles. Let’s look at a framework for deciding which is right for what you need to do.  

The first question you need to ask about your deliverable is whether it needs some visual component. In other words, can it be described easily in words or a story? Or does it need a conceptual model? A sketch? A full visual mockup? This depends on what it’s conveying, of course.

Some deliverables are more visual than others.  


Many things benefit from being shown visually. For example, explaining the layout of a page or the context of use for a product can be done much better with images than with words.

However, other things, like how often an email gets sent or which pricing plans to offer to different types of customers might be better shown in something like a spreadsheet or user story. When you’re communicating something, don’t just fall back on whatever you’re used to. Ask yourself, “could this be communicated better with an image or with a story?”  

If you do need something visual in your deliverables, the next question you need to ask is how high fidelity it needs to be.

Some visual deliverables are higher fidelity than others.


You should be careful here. While high fidelity visual designs can make users or team members feel more like a product is “real,” having a very finished looking prototype can get you worse feedback on your idea. When confronted with pretty, finished looking mockups, people tend to focus on the look of them rather than whether they’re useful or usable. A gorgeous demo looks like a fait accompli, and you’ll rarely get good input other than surface level visual comments.

You also need to decide how interactive your deliverable should be.

Some deliverables are more interactive than others.


Again, how interactive something needs to be depends on the sort of feedback you want. You’ll get significantly better usability feedback on an interactive prototype than you will on a sketch or a static mockup. People who can play with something that feels like a real product won’t have to imagine nearly as much as they would if they were looking at a set of user stories. That said, interactive prototypes take time to build, and sometimes you don’t need that level of feedback.

The last thing to consider when creating a deliverable of any sort is how maintainable it needs to be. Everybody forgets to take this into account, and it’s exactly the sort of thing that will cause you problems six months from now.

There are a lot of options between a Google Doc and a Poster.


Imagine that you’re creating a set of personas. A huge number of teams seem to think that personas should be turned into posters that can be printed out and displayed around the office. That’s fine. It can make them very high visibility.

However, imagine that three months from now you realize something that needs to change about the persona. How do you do that? You’d have to remake all the posters. That’s not terrible for things that are fairly simple and aren’t likely to change constantly. But if you’re building a product prototype for usability testing, being able to quickly and constantly make changes as the product changes and features get added will save you an enormous amount of time in the long run. If you’re building an interactive demo to show to high level execs in order to get funding for a project, on the other hand, being able to update that demo later is significantly less important.

The most important thing to remember in all of this is that deliverables, like everything else in the product development process, are not one size fits all. All the deliverables you create have a purpose and a user. Understand those before you make anything at all.

Good Enough

I've been thinking a lot about building better products lately. After all, I'm writing a book called Build Better Products, and it'll be out this autumn, so I haven't been thinking about anything else, really. 

The hard part about building better products is often knowing what better means. This is especially true when building MVPs or first versions or experiments or whatever you want to call that thing that you put out in the world in order to see if anybody might care about it.

It's even harder, I think, for designers, since many of them seem to have some sort of belief in the idea of Good Design as its own thing. Like there's a cosmic governing panel that decides whether something is Well Designed that is independent of whether the product makes users happy or makes money for the company.

And it's a tricky balance. I'm the first one to point out that if you release an incredibly crappy product, you're not going to learn anything other than that people don't like to use crappy products. We're already pretty clear on that. They don't. On the other hand, if you spend months tweaking the fonts and obsessing over every single word or loading the product up with unnecessary features, you're very likely to waste a huge amount of time and money building things nobody wants. 

How do you choose? I really wish I had a simple system that would allow you to decide when you've hit Good Enough every single time. "Should I release now? Y/N" Maybe someday we'll get that working. Until then, how about a weird analogy? 

Let's say you're cooking dinner, and the first step is to cut up some potatoes. Now, if you do a terrible job of it and hack them up into uneven pieces, it's probably going to ruin the dish and make it inedible because half of the potatoes will be raw and half will be overcooked and mushy, and the whole thing will be awful. So, instead, you take a little time and use good knife skills and cut the potatoes better. 

Now you have to decide how much time you're going to spend on the potatoes. 

Obviously, you could make them perfect - where perfect means all exactly the same size and shape and weight or cut into animal shapes or trapezoids or whatever. Molecular gastronomists have almost certainly discovered the golden ratio of surface area to interior, and I'm sure they'd love to tell you about it.  

But the more time you spend carving your potatoes into identically sized spheres, the less time you have for cooking the rest of the meal. And, frankly, having the potatoes perfect doesn't contribute that much to the overall meal. The end result of perfect potatoes may not be noticeable to the person eating the meal, and even if they did notice, it wouldn't increase their enjoyment of the meal enough to justify the time it took you to do it. They don't want perfect potatoes at midnight. They want good potatoes at 7pm. 

Remember, your goal isn't to make a perfect potato - whatever that means. Your goal is to make dinner. Preferably a dinner that the people eating it will enjoy. 

And when you think about it, you don't even know what "perfect" means. Is it that the potatoes are all the same size? Is it that they're all the same weight? Is it that they're all exactly 20% smaller in order to improve the texture? Does it depend on the person who is going to be eating the potatoes? (note: it does! If you're cooking them for me, they should be sweet potatoes, and just go ahead and fry them, thanks.)

You don't know what "perfect" means.

The great thing about cutting decent potatoes that are close to the same size but not worrying about more than that is that you can get the meal on the table, have your family eat it, and then make decisions about what you want to try next time. Maybe the potatoes would be better if you cut them a little smaller. Maybe the dish needs twice as many potatoes. Maybe you decide to substitute cauliflower for potatoes like a terrible person who doesn't deserve food. Maybe the problem isn't the potatoes at all. It's the spices. They're all wrong. You didn't see that coming, did you? 

You'll have a much better idea of what "better" means once you've shipped the meal and gotten feedback about what people liked and hated and what they left on the plate and why. 

Remember, the more time you spend obsessing about the damn potatoes, the less time you spend fixing important things like the fact that you forgot to make dessert. 

This is why I say that there's nothing wrong with aiming for "good enough," especially on the first few versions of something.  Good enough doesn't mean "too crappy to learn from" and it doesn't mean we're never improving it. It means we're getting something out that is good enough to get feedback on and that we can improve over time.

Moral of the story: THE POTATOES ARE FINE. NOBODY CARES ABOUT THE GOD DAMN POTATOES BUT YOU. I'M ORDERING A PIZZA. 

Whose Job is User Research? An Interview with Adam Nemeth

For this installment of my series on good user research practices within companies, I spoke with Adam Nemeth, UX strategist at UXStrategia.net, a UX research firm based in Hungary. He shared his perspectives on how teams should think about user research and when they should get help.

Why You Should Be Doing Research Yourself

There are a lot of reasons that your team should own and run user research. Most importantly, research is deeply connected to the product. “Design is essentially a plan for a product,” Adam says. “Whoever is responsible for bringing the product on the market should be held responsible for the research about the ‘true’ underlying problem, and also for the research which validates whether the product is truly a solution for the problem.”

Because understanding your user’s problem and finding the right solution for it are so integral to the product design process, whoever is making product decisions needs to be held responsible for the research that produces this understanding.

Of course, understanding the problem and solution are important, but they don't make up the entirety of user research. Another critical part of your product is its ease of use. “Learn usability testing,” Adam says. “For God’s sake, just do it!” He even wrote a Medium post on how to do it well, in case you don’t have any experience with it.

Why You May Need Help Doing Research

But doing your own user research may be easier said than done. There are, unfortunately, some common roadblocks that teams run into, especially when starting to conduct research without a specialist. 

“Research is easy to mess up,” Adam warns. “It’s a world full of biases.” That’s very true. It can be extremely difficult for PMs, designers, or founders to get unbiased feedback on their own work without appropriate training.

Adam explains, “A startup CEO once had a hard time believing nobody needed their wonderful product, even when it came out as an unsolicited statement from the 4th participant in a row.” Research needs to be done by somebody who can deal with the bad news. Because, as Adam says, “Research bringing only good news is usually self-deception.”

Even if you’re not actively denying bad news, some research techniques can be hard to learn and perform well. “Usability testing is easy,” Adam explains, “but learning how to do interviews and field studies properly is much more difficult. You have to watch your own posture, your tone of voice, choose your words carefully, and be open to a world you know is filtered by your own assumptions, yet you must strive to get a glimpse behind them.”

To improve, Adam recommends recording yourself, not just to listen for the answers to your questions but to hear the questions themselves. You want to hear what you’re asking, how you’re asking it, and the sorts of responses it’s eliciting so that you can improve.

Also, some research is simply harder to do. It’s not all just usability studies and guerrilla coffee shop tests. Diary studies take a long time and a lot of attention. Hard-to-locate participants from very specific groups can make recruiting a huge task. Sometimes you’ll need to find an external person to manage bigger research projects, just to make sure that they get the attention they need.

Finally, even if you do have a research specialist on the team, they can run into an entirely different set of problems. Sadly, user researchers within organizations aren't always believed. In-house researchers can suffer from credibility issues, through no fault of their own. Companies sometimes need to hear the same results from an outside consultant in order to really listen.

So, Who Is Responsible for Research?

“It all boils down to these three factors,” Adam explains. “Who is able to argue the best for the user against a product choice? Who is able to notice a product error? Who is responsible for the product?” Whoever that person is, they’re the one who should be responsible for research.

Whoever is responsible for it, “Understanding users should be deeply embedded in the culture,” Adam says. “When I'm working with clients, I always facilitate studies. I don't believe in handed-in reports; no one will care about them and everyone will forget them soon. But making sure every single stakeholder participated in field studies, and that we watch usability test videos together - that is an experience which brings users closer to everyone within a product team.”

In other words, it doesn’t matter if research is being done by internal resources or external consultants. We’re all responsible for being involved in the research so that we can truly understand our users.

Whose Job is User Research? An Interview with Amy Santee

Almost every time I give a talk, I get asked how people can convince their companies to adopt more user research or pay attention to the research that’s being done. I’ve always given various answers, ranging from “quit and go to a better company” to “try to make research more inclusive,” but I realized that I was giving advice based on too small of a sample set.

So, earlier this year, I became obsessed with finding out who owns user research in functional teams. I’ve asked dozens of people on all sorts of different teams what makes research work for them, and I’m sharing their responses here. If you’re someone who is very familiar with how user research works on your team and would like to participate, please contact me.

Recently, I asked Amy Santee, an independent UX research consultant, some questions about who should own research. She trained as an anthropologist, and she’s been doing user research for several years for companies ranging from healthcare to insurance to hardware and mobile tech companies, so she’s seen what works and doesn’t work in a lot of different models.

Who Owns It

“For internal teams,” Amy explains, “researchers and designers who do research should ‘own’ the research process in the sense of being the go-to people responsible for driving its fundamental activities: planning and budgets, coordination, research design, recruiting participants, conducting sessions, and disseminating the results. It’s not so much ‘owning’ it but being the point person for getting things done.”

One of the themes I’ve seen so far in these interviews is the strong difference between the responsibility for conducting research and the ownership of the results. Regardless of who is responsible for getting research done, research results and participation should be owned by the whole team. The researcher or designer might be driving the process, but everybody else on the team should be along for the ride.

It’s not just direct members of the team who need to participate, either. “Stakeholders in design, product, engineering, business, marketing and other areas should share in this ownership to the greatest degree possible,” Amy says. “That’s why they’re called stakeholders – they have (or should have) a stake in the game when it comes to incorporating research into their processes and decision-making.”

To be clear, in Amy’s model, the stakeholders aren’t just interested in the outcomes. They should be active participants in the research. They can offer important perspectives from their respective business areas, and they should contribute to the research process itself by observing sessions, brainstorming ideas and solutions, and helping to synthesize the results.

The benefits of this sort of participatory research, Amy says, are clear. “The more this is done, the more value people will see in being involved, and the less the researcher needs to ‘own’ research by him or herself. Stakeholders might even learn how to do research so they don’t always have to rely on a single person or team to do it.”

Researching without Researchers

Of course, not all teams are lucky enough to have dedicated researchers or designers who are trained in user research methods. Amy has some suggestions for those who decide to do research without any experience on the team.

“My preference is for internal researchers because they have an understanding of the company and product from the inside,” she explains. “They are able to really get a sense for how research fits into the design process and business strategy. They can build relationships with other business areas and roles in order to figure out how research can bring the most value, when to do it, who to get involved, how to communicate most effectively, and possibly make more effective recommendations.”

That said, there are reasons to bring in experts from outside. “Training from an expert who has the right background and experience can help a team get started with the fundamentals and avoid the inappropriate execution of a research project (e.g., wrong methodology, misinformed research questions, etc.),” she says.

Sometimes, combining an external expert with internal trainees can even yield certain unexpected benefits. For example, outside consultants might have a fresher look at the questions the team should be asking. They might be able to bring up things that team members wouldn’t feel comfortable saying because they don’t have a bias or agenda. And, of course, they’ll typically have experience working with many different teams, so they’ll be able to spot patterns that less experienced researchers might not see.

Whether you’re working with internal experts or external coaches, the important thing is that the people on your team are engaged in the process. Making research a collaborative effort means more people in your company will learn from users, and that’s good for your product.

Learn More

For more information about Amy, check out her website, find her on Twitter, or connect with her on LinkedIn.

Whose Job is User Research? An Interview with Steve Portigal

This post appears in full on the Rosenfeld Media blog. 

Those of us who conduct user research as part of our jobs have made pretty big gains in recent years. I watched my first usability test in 1995, then spent a good portion of the 2000s trying to convince people that talking to users was an important part of designing and building great products. These days, when I talk to companies, the questions are less about why they should do user research and more about how they can do it better. Believe me, this feels like enormous progress.

Unfortunately, you still don’t see much agreement about who owns user research within companies. Whose job is it to make sure it happens? Who incorporates the findings into designs? Who makes sure that research isn’t just ignored? And what happens when you don’t have a qualified researcher available? These are tough questions, and many companies are still grappling with them.

So, I decided to talk to some people who have been dealing with these questions for a living. For this installment of the Whose Job is User Research blog series, I spoke with Steve Portigal, Principal at Portigal Consulting. He’s the author of Interviewing Users, which is a book you should read if you ever do any research on your own.

Steve has spent many years working with clients at large and small companies to conduct user research of all types. He also spends a lot of his time helping product teams get better at conducting their own research. Because he’s a consultant, he sees how a large number of companies structure their research processes, so I asked him to give me some advice.

Read the rest at Rosenfeld Media.

Whose Job is User Research? An Interview with Dorian Freeman

As part of my ongoing series about how user research is being done in organizations, I asked Dorian Freeman, the User Experience Lead at Harvard Web Publishing, to answer a few questions. She shared her experiences working in UX design since the late ‘90s. See the rest of the series here.

Owning Process vs. Owning Results

When I asked Dorian who on a product team should own user research, she explained that there is a difference between owning the process and owning the results. “The people who are accountable or responsible for research are the ones who oversee the researching, the synthesizing of the data, and the reporting on the findings,” she explained. “However, the data from the research is ‘owned’ really by everyone in the company. It should be accessible to everyone.”

 

This is an important distinction. Regardless of who is collecting the data, the output is important to the whole company and should be accessible and used by everybody. Research, in itself, is not particularly valuable. It’s only the application of the research to product decisions that makes it valuable. Making the results of the research available to everybody means that, hopefully, more decisions will be made based on real user needs.

External vs. Internal

A few folks in this series have talked about the benefits of having UX research done by members of the team, but Dorian called out one very important point about external researchers. “An external expert can often provide insights that are more credible to the leadership than an internal expert, which is a perception issue, but helpful in some cases.”

And she’s absolutely right. We may not always love that it’s true, but highly paid external consultants will sometimes be listened to where an employee won’t, even when they’re saying the same things.

On the other hand, for day to day research that is informing product team decisions, an in-house expert is often preferable. Dorian says, “Typically, the in-house expert researcher has more institutional knowledge which can speed up the process and provide more insight. In the ideal scenario, the product team should always have an internal expert researcher working closely with them.”

For teams that aren’t lucky enough to have an expert, Dorian recommends getting someone on the team to learn how to do it. “Understanding the people who use your product is essential,” she says. “If you’re not interviewing users, you’re not doing UX.”

Who Does What on Product Teams?

This is a post I wrote for the Rosenfeld Media blog in preparation for the PM + UX Conference. Some of the research I did will also be covered in more detail in my upcoming book, Build Better Products.

When we started talking about putting on the PM + UX Conference, the first thing we asked was, “What sorts of things should we talk about?” Since the folks at Rosenfeld Media are, not surprisingly, extremely user-centered, the obvious answer was, “We’re not sure. How about we do some research and find out what questions our attendees might have?” So we did.

The most interesting thing to me was that a lot of the questions people asked boiled down to “Who does what on a product team?” This was curious. I mean, we’re all working on product teams or we’ve worked on them in the past, right? Shouldn’t we know what our jobs are? Shouldn’t we know what everybody else is doing? Well, yes! We should! And yet… when I started to dig around and have conversations with people, I got very, very different answers about how things really worked.

That was odd. It turned out that, although we all have job titles like Product Manager or UX Designer, many of us have very different ideas about what it is that we’re supposed to actually do all day. Do designers talk to customers? What about PMs? Who decides what features go into a product? Who makes wireframes? Does anybody do usability testing? If not, could they please start?

Like any good team faced with more questions than they started with, we did some more research. Ok, first we had a couple of stiff drinks. Then we did some more research. I was volunteered to lead the way.

Read the rest at Rosenfeld Media >

Whose Job is User Research? An Interview with Susan Wilhite


As part of my ongoing series where I try to find out who is doing user research in organizations and who should be, I spoke with Susan Wilhite. Susan is a lead UX researcher. She was incredibly helpful in explaining how teams work best under different conditions. This is the third post in the series. 

Strategic vs Tactical

When we talk about ownership of the research function, we have to start with the type of research we’re doing and our goals for that research. “When research is mostly tactical,” Susan explains, “it should be owned by either product management or the design team, with the other being a key stakeholder.” 

Research that is intended to answer very specific, well-understood questions should be driven by the people on the team who are asking those questions. For example, usability testing and other forms of tactical, evaluative research are going to be owned and driven by the people responsible for making decisions based on the results of the studies.

Strategic research, on the other hand, like that done when a company is still developing its primary product or service or is branching into other lines of business, should be led with broad direction and budget from the VP of product or another high-level stakeholder. This puts that leader in the best position to interpret UX research findings for their peers and champion those ideas into wider strategic decisions.

Most importantly though, generative and formative research is best done in-house rather than by people outside the company. “This research, unlike evaluative, has a very long shelf life. A tiny amount of information from strategic studies is communicated in a final report,” Susan explains. “Findings developed outside the company can be a lost opportunity to grow institutional knowledge within the org over time. Down the road this is important because findings from generative and formative research inform the most tactical research.”

In other words, don’t pay vendors to acquire deep knowledge about your users unless you intend a long-term relationship with those outside researchers. Understanding the product/service and users is a critical advantage, and the understanding that comes from conducting generative and formative research should be kept close to the vest.

Cross Functional Teams vs Silos

Recently, with the growth in Agile and Lean methodologies, we’ve seen a lot of companies break down functional silos in favor of cross-functional teams. This can improve communication within the product team and help diminish the waste that happens when silos only communicate through deliverables. Susan points out some of the advantages and disadvantages of doing away with the research team.

“I have become a fan of the embedded research function,” Susan says. “Researchers are themselves tools, and as such are vastly more effective when given the chance to compound learnings and develop stakeholder trust in a circumscribed domain.” When a user researcher works within a product team, they become much more effective, since they’ll have a better understanding of the team’s real research needs. They can also build trust with the team, which will hopefully lead to less resistance to suggestions made by the researcher. 

On the other hand, embedded UX research has its own problems. “The hazard here is that product groups have varying budgets and sexiness – a researcher caught in a group not advancing fast from attention given by executives or the market can hobble a career.” Having a separate research team can prevent that by allowing researchers to circulate among teams and find areas of interest and groups where they work best. But still, it takes a very well-managed corporate culture for a silo to work. As Susan warns about research teams in silos, “Success is uncommon.”

Regardless of the company org chart, Susan encourages summing up and offering evolved thinking on strategic frameworks and tactical principles throughout the company. “I’d like to see twice-yearly off-sites where the org reviews what has been learned and workshops ideas from the product team at large,” she says. “Partly to remind the team of what has been learned and how we think we know it, but also to ponder aspirational research - what’s next.”

Whose Job is User Research? An Interview with Tomer Sharon

I'm interviewing researchers, designers, product managers, and other people in tech about their opinions on how user research should be done within companies. This is the second post in the series, and it appeared in full on the Rosenfeld Media blog. 

If you'd like to be featured as part of the series, contact me.

As part of my ongoing series of posts where I try to get to the bottom of who owns user research, I reached out to Tomer Sharon, former Sr. User Experience Researcher for Google Search and now Head of UX at WeWork. He also wrote a book called It’s Our Research which addresses this exact topic, and his new book Validating Product Ideas is now available. He’ll be speaking at the upcoming Product Management + User Experience conference from Rosenfeld Media about ways teams can work together to learn more about their users.

I asked Tomer a few questions about his recent statement that UX at WeWork won’t have a research department and what suggestions he has for creating a team that conducts research well and uses it wisely.

Read the rest at Rosenfeld Media >