A project I’ve been working on recently should really be featured in every book on agile and lean design. I’ve found that many projects and clients have similar issues with their design processes, but it’s rare for a single project to demonstrate so many incredibly important tenets of lean UX so clearly.
So far, over the course of about a month, this one has driven home all of the following lessons:
- You shouldn’t be a slave to your metrics, especially when they’re telling you something surprising
- Qualitative testing is necessary for understanding why your metrics are what they are
- Iteration in the design process is critical
- Very small changes can make a huge difference
- You need to understand the actual experience your user is having with your product
- Don’t give up on your vision too early!
So, what happened?
I’m working with a very cool startup that is absolutely devoted to metrics. They A/B test and measure everything. When they first brought me on, the team went over some history of the product.
They originally had a particular flow for the product, but when they did qualitative testing, it became clear to them that users expected things to behave differently. The team designed a new version of the product that they felt addressed the issues that testers were having. They then built and released the new version.
It bombed. Well, perhaps that’s a bit strong, but it performed significantly worse than the original design in an A/B test.
They were surprised, but the data had spoken. The team reasonably assumed that their original reading of the qualitative testing results had been flawed, and that users really didn’t want the new flow. Since the redesign had gone so badly, they decided to hire a designer to help them come up with a better version.
That’s where I came in. The problem was, when they showed me the two different versions, I quite honestly could not understand why the new version had not crushed the old version in the test. It was obviously better!
I decided I needed to do some user testing to understand why their users would prefer what I felt was obviously the worse option. Was I misunderstanding the audience? Was the test flawed? Were they insane?
Over a couple of days, we observed quite a few test participants going through the two different new-user flows. Because the company has users all over the country, we did a lot of remote testing and made use of usertesting.com so that we could observe people in various locations using their own computers.
The results were astonishing. It turned out that there was a strange bug that hit every participant who tried to use the new flow. There was one action the user had to perform a few times, and every time they did, there was a lag of several seconds during which they got no feedback. Users responded by clicking the button repeatedly, which made for a confusing experience when all of those clicks finally registered.
This problem had never shown up in onsite testing: the company’s office had powerful machines and plenty of bandwidth, so the team never saw the lag internally.
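To make the failure mode concrete, here’s a minimal sketch of the pattern we were seeing. I’m assuming a flow step that advances via an async request; the endpoint and function names are hypothetical, not the company’s actual code.

```ts
// Hypothetical sketch of the buggy pattern, not the company's real code.
declare function renderNextStep(): void; // stand-in for the real UI update

const button = document.querySelector<HTMLButtonElement>("#advance")!;

button.addEventListener("click", async () => {
  // On a slow connection this request takes several seconds. The button
  // stays enabled and silent the whole time, so users click again and
  // again, queueing a duplicate request with every click. When the
  // responses finally land, the UI lurches through several state
  // changes at once.
  await fetch("/api/advance-flow", { method: "POST" });
  renderNextStep();
});
```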
Instead of redesigning their entire site, I encouraged the company to fix this small bug so we could run a fair test and find out whether the new flow really was a better direction for the product than the old one. Once we knew which flow users actually preferred, we could base further design changes on that understanding.
The team fixed the bug and released the new experiment. While the final results still aren’t in, the new design is now trending higher than the old design, rather than the other way around. More importantly, it’s clear that fixing that one small bug had a significant impact on the performance of the new design.
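For anyone curious how you’d judge when a trend like that becomes a real result, a standard two-proportion z-test is one common way to check. This is my own sketch with made-up numbers, assuming a simple conversion-rate metric; it’s not the company’s actual data or tooling.

```ts
// Two-proportion z-test: is variant A's conversion rate really
// different from variant B's, or is the gap just noise?
function zTest(convA: number, totalA: number, convB: number, totalB: number): number {
  const pA = convA / totalA;
  const pB = convB / totalB;
  const pooled = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pA - pB) / se;
}

// Made-up counts: the new flow trends higher, but |z| < 1.96 means the
// difference isn't yet significant at the usual 95% confidence level.
const z = zTest(480, 5000, 430, 5000);
console.log(`z = ${z.toFixed(2)}, significant: ${Math.abs(z) > 1.96}`);
```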
So, let’s look at what we learned:
Don’t Be a Slave to Your Metrics
In this case, the A/B test quite clearly showed the old version as the winner, but that didn’t tell the whole story. If the team had simply believed their metrics rather than investigating a surprising outcome, they would have abandoned a very promising direction for their product.
Even worse, they might have assumed that the new version failed because users didn’t like the new flow, which might have led them to make other similar decisions based on a flawed assumption.
Qualitative Testing is Necessary for Understanding Metrics
As I said in my Web 2.0 Expo talk, quantitative metrics only tell you what your problem is. To understand why you’re experiencing that problem, you need to observe users interacting with your product directly.
Quantitative metrics tell you WHAT. Qualitative research tells you WHY.
Iteration in the Design Process is Critical
When you’re testing an entirely new version of something against an older version that may already have been optimized and tested, it’s often necessary to go through a few iterations of the new design just to get it to parity with the old one.
Sometimes a new design wins the test right out of the gate, but not always. Spending a little bit of time understanding its performance and iterating on the design can deliver something much better in the long run.
Small Changes Make a Huge Difference
A very small bug nearly sank a whole new design approach. The fix was to provide immediate feedback for each button press with a small “loading” animation. It wasn’t quite a one-line fix, but it was awfully close, and it made the difference between failure and success.
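For illustration, here’s roughly what that kind of fix might look like, under the same assumptions as the earlier sketch: disable the button and show a loading indicator the moment it’s clicked, then restore everything when the request completes. Again, the names are hypothetical.

```ts
// Sketch of the fix: immediate feedback plus protection against
// duplicate clicks. Hypothetical names, same assumptions as before.
declare function renderNextStep(): void;
declare function showLoadingAnimation(): void;
declare function hideLoadingAnimation(): void;

const button = document.querySelector<HTMLButtonElement>("#advance")!;

button.addEventListener("click", async () => {
  button.disabled = true;  // swallow any duplicate clicks
  showLoadingAnimation();  // feedback appears instantly, even on slow connections
  try {
    await fetch("/api/advance-flow", { method: "POST" });
    renderNextStep();
  } finally {
    hideLoadingAnimation();
    button.disabled = false;
  }
});
```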
You Need to Understand Your User’s Experience
I’ve said it before: just as you are not your user, your machine is not your user’s machine, your bandwidth is not your user’s bandwidth, and so on. This small problem would never have been found without direct observation of users in their natural habitats. To be clear, though, we did it all remotely, so we didn’t have to spend the money to fly all over the country.
Don’t Give Up On Your Vision Too Early!
Sometimes you’re wrong. It happens. You can think that something is going to improve your user’s experience and just be off-base. But don’t give up immediately. You had this vision for a reason, and you spent time designing and building something that you believed would solve a problem.
You owe it to yourself to dig a little deeper and understand why your product is failing, if only to learn what went wrong and avoid making the same mistake in the future.