Lean Analytics

Stage One: Empathy

At the outset, you’re spending your time discovering what’s important to people and being empathetic to their problems. You’re searching through listening. You’re digging for opportunity through caring about others. Right now, your job isn’t to prove you’re smart, or that you’ve found a solution. Your job is to get inside someone else’s head. That means discovering and validating a problem and then finding out whether your proposed solution to that problem is likely to work.
Metrics for the Empathy Stage

In the Empathy stage, your focus is on gathering qualitative feedback, primarily through problem and solution interviews. Your goal is to find a problem worth solving and a solution that's sufficiently good to garner early traction. You're collecting this information by getting out of the building. If you haven't gotten out of the building enough—and spoken to at least 15 people at each interviewing stage—you should be very concerned about rushing ahead. Early on, you'll keep copious notes. Later, you might score the interviews to keep track of which needs and solutions were of the greatest interest, because this will tell you what features need to be in your minimum viable product (MVP).

This Is the Best Idea I've Ever Had! (or, How to Discover Problems Worth Solving)

Entrepreneurs are always coming up with ideas. While some people say "ideas are easy," that's not entirely true. Coming up with an idea is hard. Coming up with a good idea is harder. Coming up with an idea that you go out and validate to the point where it makes sense to build something is really, really hard.

Problem (or idea) discovery often starts with listening. After all, people love to complain about their problems. But take their complaining with a grain of salt. You need to listen actively, almost aggressively, for the nugget of truth or the underlying pattern. Big, lucrative startups are often the result of wildly audacious solutions to problems people didn't realize they had. Discovery is the muse that launches startups.

In some cases, you won't need to discover a problem. It will be the reason you founded a startup in the first place. This is particularly true for enterprise-focused initiatives or startup efforts that happen within a willing host company. As an intrapreneur, you may have noticed a pattern in customer support issues that suggests the need for a new product. If you're selling to enterprises, maybe you were an end user who realized something was missing, or a former employee of a vendor who saw an opportunity.

Your idea is simply a starting point. You should let it marinate awhile before jumping into it. We're huge believers in doing things quickly, but there's a difference between focused speed in a smart direction and being ridiculously hasty. Your first instinct will be to talk to your friends. This isn't a genuine or measurable part of Lean Startup, but it's not a bad first step. Ideally, you've got a group of friends, or trusted advisors, who are in and around the relevant space of interest, from whom you can get a quick reality check.
Your trusted friends and advisors will give you their gut reaction (see—we don’t hate guts at all!), and if they’re not pandering to you or trying to avoid hurting your feelings, then you’ll get at least semi-honest feedback. You may also get some insight that you hadn’t thought of: information about competitors, target markets, different takes on the idea, and so on. This quick “sniff test” is an excellent investment for the first few days after you get an idea, before committing any formal work to it. If the idea passes the sniff test, it’s time to apply the Lean Startup process.

Finding a Problem to Fix (or, How to Validate a Problem)

The goal of the first Lean stage is to decide whether the problem is painful enough for enough people and to learn how they are currently trying to solve it. Let's break down what that means:

The problem is painful enough
People are full of inertia. You want them to act, and you want them to do so in a way that helps your business. This requires enough discomfort with their situation that they actually do what you want—signing up, paying your price, etc.

Enough people care
Solving a problem for one person is called consulting. You need an addressable market. Marketers want audiences that are homogeneous within (that is, members of the segment have things in common to which you can appeal) and heterogeneous between (that is, you can segment and target each market segment in a focused manner with a tailored message).

They're already trying to solve it
If the problem is real and known, people are dealing with it somehow. Maybe they're doing something manually, because they don't have a better way. The current solution, whatever it is, will be your biggest competitor at first, because it's the path of least resistance for people.
Note that in some cases, your market won't know it has a problem. Before the Walkman, the minivan, or the tablet computer, people didn't know they had a need—indeed, Apple's ill-fated Newton, a decade before the iPad, suggested the need didn't exist. In this case, rather than just testing for a problem people know they have, you're also interested in what it takes to make them aware of the problem. If you're going to have to "plow the snow" in your market, you want to know how much effort it will take so you can factor that into your business model.

You need to validate each of these things (and a few more besides) before moving to the next stage, and analytics plays a key role in doing so. Initially, as we've pointed out, you'll use qualitative metrics to measure whether or not the problem you've identified is worth pursuing. You start this process by conducting problem interviews with prospective customers.

We suggest that you speak with 15 prospective customers to start. After the first handful of interviews, you'll likely see patterns emerging already. Don't stop talking to people. Once you get to 15 interviews, you should have the validation (or invalidation) that you need to help clarify the next steps. If you can't find 15 people to talk to, well, imagine how hard it's going to be to sell to them. So suck it up and get out of the office. Otherwise, you're wasting time and money building something nobody wants.

While the data you're collecting at this stage is qualitative, it has to be material enough that you can honestly say, "Yes, this problem is painful enough that I should go ahead and build a solution." One customer doesn't make a market. You can't speak with a few people, get generic positive feedback, and decide it's worth jumping in.

Pattern | Signs You've Found a Problem Worth Tackling

The key to qualitative data is patterns and pattern recognition. Here are a few positive patterns to look out for when interviewing people:

• They want to pay you right away.
• They're actively trying to (or have tried to) solve the problem in question.
• They talk a lot and ask a lot of questions, demonstrating a passion for the problem.
• They lean forward and are animated (positive body language).

Here are a few negative patterns to look out for:

• They're distracted.
• They talk a lot, but it's not about the problem or the issues at hand (they're rambling).
• Their shoulders are slumped or they're slouching in their chairs (negative body language).
At the end of the problem interviews, it's time for a gut check. Ask yourself: "Am I prepared to spend the next five years of my life doing nothing else but solving the problem in question?"

Pattern | Running Lean and How to Conduct a Good Interview

Ash Maurya is one of the leaders in the Lean Startup movement. He's experimented with and documented Lean Startup practices for several years with his own startups, and he wrote a great book called Running Lean (O'Reilly). It's a good complement to this book. Ash describes a prescriptive, systematic approach for interviewing people during the early stages of the Lean Startup process.

For starters, you need to conduct problem interviews. You decouple the solution (which we know you're excited about!) from the problem, and focus on the problem alone. The goal is to find a problem worth solving. And remember, customers are tired of solutions—they get pitched continually on magical doohickeys that will make their lives easier. But most of the time, the people pitching don't understand the customers' real problems.

Here are some tips from Ash and Running Lean for conducting good interviews:

• Aim for face-to-face interviews. You not only want to hear what people are saying, you also want to see how they're saying it. People are generally much less distracted when meeting face-to-face, so you'll get a higher quality of response.

• Pick a neutral location. If you go to a subject's office, it's going to feel more like a sales pitch. Find a coffee shop or something casual.

• Avoid recording interviews. Ash notes that in his experience, subjects get more self-conscious if the interview is being recorded, and the quality of interviews subsequently drops.

• Make sure you have a script. While you may adjust the script a bit over time, you're not tweaking it constantly in order to "get the answers you want" or rig anything in your favor. You have to stay honest throughout the process.

The script is probably the hardest thing to do well. Early on, you may not even be sure what questions to ask. In fact, that's why surveys don't work at an early stage—you just don't know what to ask in order to collect meaningful information. But a script will give you enough consistency from interview to interview that you can compare notes.

Most of the problem interview is fairly open-ended. You want to give subjects the opportunity to tell you whatever they want to, and you want them to do so in a comfortable, free-form manner. In Running Lean, Ash provides a very good breakdown of interview scripts. We've summarized the problem interview script as follows:

• Briefly set the stage for how the interview works. This is the point where you tell the interviewee what you're going to tell (or ask) her. Highlight the goals of the interview to put the interviewee in the right frame of mind.

• Test the customer segment by collecting demographics. Ask the subject some basic questions to learn more about her and understand what market segment she represents. These questions depend a great deal on the types of people you speak to. Ultimately, you want to learn about their business or their lifestyle (in the context of the problems you're proposing to solve), and learn more about their role.

• Set the problem context by telling a story. Connect with the subject by walking her through how you identified the problems you're hoping to solve, and why you think these problems matter. If you're scratching your own itch, this will be a lot easier. If you don't understand the problems clearly, or you don't have good hypotheses for the problems you're looking to solve, it's going to show at this point.

• Test the problem by getting the subject to rank the problems. Restate the problems you've described and ask the subject to rank them in order of importance. Don't dig too deeply, but make sure to ask her if there are other related problems that you didn't touch on.

• Test the solution. Explore the subject's worldview. Hand things over to the customer and listen. Go through each problem—in the order the subject ranked them—and ask the subject how she solves it today. There's no more script. Just let the subject talk.
This is the point in the interview when you can really do a qualitative assessment of whether or not you’ve found problems worth solving. It may go well, with subjects begging you to solve the problem, or you might get a resounding “meh,” in which case there’s a clear disconnect between your business and the real world.

• Ask for something now that you’re done. You don’t want to discuss your solution at length here, because it will feel too much like a sales call, but you should use a high-level pitch to keep the subject excited. Ideally, you want her to agree to do a solution interview with you when you’re ready with something to show—these initial subjects can become your first customers—and you want her to refer other people like her so you can do more interviews.
As you can tell, there's a lot that goes into conducting a good interview. You won't be great at it the first time, but that's OK. Hopefully some of what we've covered here and other resources will give you the tools you need. Get a good script in place, practice it, and get out there as quickly as you can. After a handful of interviews, you'll be very comfortable with the process and you'll start seeing trends and collecting information that's incredibly valuable. You'll also be immeasurably better at stating the problem clearly and succinctly, and you'll collect anecdotes that will help with blogger outreach, investor discussions, and marketing collateral.

Qualitative metrics are all about trends. You're trying to tease out the truth by identifying patterns in people's feedback. You have to be an exceptionally good listener, at once empathetic and dispassionate. You have to be a great detective, chasing the "red threads" of the underlying narrative, the commonalities between multiple interviewees that suggest the right direction. Ultimately, those patterns become the things you test quantitatively, at scale. You're looking for hypotheses.

The reality of qualitative metrics is that they turn wild hunches—your gut instinct, that nagging feeling in the back of your mind—into educated guesses you can run with. Unfortunately, because they're subjective and gathered interactively, qualitative metrics are the ones that are easiest to fake. While quantitative metrics can be wrong, they don't lie. You might be collecting the wrong numbers, making statistical errors, or misinterpreting the results, but the raw data itself is right. Qualitative metrics are notoriously easy for you to bias. If you're not ruthlessly honest, you'll hear what you want to hear in interviews. We love to believe what we already believe—and our subjects love to agree with us.

Pattern | How to Avoid Leading the Witness

We're a weak, shallow species. Human beings tend to tell you what they think you want to hear. We go along with the herd and side with the majority. This has disastrous effects on the results you get from respondents: you don't want to make something nobody wants, but everybody lies about wanting it. What's a founder to do? You can't change people's fundamental nature. Response bias is a well-understood type of cognitive bias, exploited by political campaigners to get the answer they want by leading the witness (this is known as push polling). You can, however, do four things: don't tip your hand, make the question real, keep digging, and look for other clues.
Don't Tip Your Hand

We're surprisingly good at figuring out what someone else wants from us. The people you interview will do everything they can, at a subconscious level, to guess what you want them to say. They'll pick up on a variety of cues.

• Biased wording, such as "do you agree that…," is one such cue. This leads to an effect called acquiescence bias, where a respondent will try to agree with the positive statement. You can get around this by asking people the opposite of what you're hoping they'll say—if they are willing to disagree with you in order to express their need for a particular solution, that's a stronger signal that you've found a problem worth solving.

• This is one reason why, early in the customer development process, open-ended questions are useful: they color the answers less and give the respondent a chance to ramble.

• Preconceptions are another strong influencer. If the subject knows things about you, he'll likely go along with them. For example, he'll answer more positively to questions on the need for environmental protection if he knows you're a vegetarian. The fewer things he knows about you, the less he'll be able to skew things. Anonymity can be a useful asset here; this is a big reason to keep your mouth shut and let him talk, and to work from a standardized script.

• Other social cues come from appearance. Everything in your demeanor gives the respondent clues about how to answer you. These days, it's probably hard for you to hide details about yourself, since we live fairly transparently online and you may have met your respondents through social networks. But you'll get better data if you dress blandly and act in a manner that doesn't take strong positions or give off signals.
Make the Questions Real

One way to get the real answer is to make the person uncomfortable.

"People only get really interesting when they start to rattle the bars of their cages."
Alain de Botton, author and philosopher
Next time you're interviewing someone, instead of asking "Would you use this product?" (and getting a meaningless, but well-intentioned, "yes"), ask for a $100 pre-order payment. You'll likely get a resounding "no." And that's where the interesting stuff starts. Asking someone for money will definitely rattle her cage. Will it make both of you uncomfortable? Absolutely. Should you care? Not if you're interested in building something people will actually pay for.

The more concrete you can make the question, the more real the answer. Get subjects to purchase rather than indicate a preference. Ask them to open their wallets. Get the names of five friends they're sure will use the product, and request introductions. Suddenly, they're invested. There's a real cost to acting on your behalf. This discomfort will quickly wash away the need to be liked, and will show you how people really feel.

One other trick to overcome a subject's desire to please an interviewer is to ask her how her friends would act. Asking "Do you smoke pot?" might make someone answer untruthfully to avoid moral criticism, but asking "What percentage of your friends smoke pot?" is likely to get an accurate answer that still reflects the person's perception of the overall population.
Keep Digging

A great trick for customer development interviews is to ask "why?" three times. It might make you sound like a two-year-old, but it works. Ask a question; wait for the person to finish. Pause for three seconds (which signals to her that you're listening, and also makes sure she was really done). Then ask why.

By asking “why?” several times, you force a respondent to explain the reasoning behind a statement. Often, the reasoning will be inconsistent or contradictory. That’s good—it means you’ve identified the gap between what people say they will do and what they will actually do. As an entrepreneur, you care about the latter; it’s hard to convince people to act against their inner, moral compasses. “Anyone who values truth,” says Jonathan Haidt, author of The Righteous Mind (Pantheon), “should stop worshipping reason.” The reasoning of your interview subjects is far less interesting than their true beliefs and motivations. You can also take a cue from interrogators and leave lingering, uncomfortable silences in the interview—your subject is likely to fill that empty air with useful, relevant insights or colorful anecdotes that can reveal a lot about her problems and needs.
Look for Other Clues

Much of what people say isn't verbal. While the amount of nonverbal communication has been widely overstated in popular research, body language often conveys feelings and emotions more than words do. Nervous tics and "tells" can reveal when someone is uncomfortable with a statement, or looking to another person for authority, for example.

When you're interviewing someone, you need to be directly engaged with that person. Have a colleague tag along to take notes, and ask him to watch for nonverbal signals as well. This lets you build a bond with the subject and focus on her answers, while still capturing important subliminal messages.

And never forget to ask the "Columbo" question. Like Peter Falk's TV detective, save one disarming, unexpected question for the very end, after you've said your goodbyes. This will often catch people off guard, and can be used to confirm or repudiate something significant they've said in the interview.
Convergent and Divergent Problem Interviews

As we wrote this book, we tested out several ideas on entrepreneurs and blog readers. One of the more contentious ideas we discussed was that of scoring problem validation interviews. Several readers felt that this was a good idea, allowing them to understand how well their discovery of needs was proceeding and to rate the demand for a solution. Others protested, sometimes vociferously: scoring was a bad idea, because it interfered with the open, speculative nature of this stage. We'll share our scoring framework later in the book. First, however, we'd like to propose a compromise: problem validation can actually happen in two distinct stages.

While the goal of a problem interview is always the same—decide if you have enough information and confidence to move to the next stage—the tactics to achieve this do vary. In Ash Maurya's framework from earlier in this chapter, he suggests telling a story first to create context around the problem. Then he suggests introducing more specific problems and asking interviewees to rank them. This is a convergent approach: it's directed, focused, and intended to quantify the urgency and prevalence of the problems, so you can compare the many issues you've identified. In a convergent problem interview, you're zeroing in on specifics. While you want interviewees to speak freely, and the interviews aren't heavily structured, you're not on a fishing expedition with no idea what you're fishing for.

A convergent problem interview gives you a clear course of action, at the risk of focusing too narrowly on the problems that you think matter rather than freeing interviewees to identify other problems that may be more important to them. For example, you might steer subjects back to your line of questioning at the expense of having them reveal an unexpected adjacent market or need.

A divergent problem interview, on the other hand, is much more speculative, intended to broaden your search for something useful you might go build. In this type of problem interview, you're discussing a big problem space (healthcare, task management, transportation, booking a vacation, etc.) with interviewees and letting them tell you what problems they have. You're not suggesting problems and asking them to rank them.
You probably have a problem or two that you're looking to identify, and you'll measure the success of the interviews, in part, by how often interviewees mention those problems (without you having done so first). The risk with a divergent problem interview is that you venture too broadly across too many issues and never get interviewees to focus. Divergent problem interviews run the risk of giving you too many problems, or not enough similar problems, and no clarity on what to do next.

It takes practice to strike the right balance when doing interviews. On the one hand, you want to give interviewees the opportunity to tell you what they want, but you have to be ready to focus them when you think you've found something worthwhile. At the same time, you shouldn't hammer away at the problems you're presenting if they're not resonating.

If you're just starting out, and really focused on an exploratory exercise, then try a divergent problem interview. Scoring in this case is less relevant. Collect initial feedback and see how many of the problems people freely described to you match up. If that goes well, you can move to convergent problem interviews with other people and see if the problems resonate at a larger scale.
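As a rough sketch of how you might quantify divergent interviews after the fact, the snippet below tallies how often interviewees raised your hypothesized problems unprompted. The problem labels and interview data are invented for illustration; only the idea of counting unprompted mentions comes from the text.

```python
# Hypothetical tally of unprompted problem mentions across divergent
# interviews. Each interview is the set of problems the interviewee
# raised on her own; the hypotheses are the problems we hoped to hear.
from collections import Counter

def mention_rates(interviews, hypotheses):
    """Return the share of interviews that mentioned each hypothesized problem."""
    counts = Counter()
    for raised in interviews:
        for problem in hypotheses:
            if problem in raised:
                counts[problem] += 1
    total = len(interviews)
    return {p: counts[p] / total for p in hypotheses}

# Example: four divergent interviews in the task-management space.
interviews = [
    {"too many tools", "missed deadlines"},
    {"missed deadlines", "no visibility"},
    {"email overload"},
    {"missed deadlines", "too many tools"},
]
rates = mention_rates(interviews, ["missed deadlines", "too many tools"])
print(rates)  # {'missed deadlines': 0.75, 'too many tools': 0.5}
```

A high unprompted-mention rate suggests the problem resonates; a low one suggests you are hearing about other, possibly more important, problems instead.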
How Do I Know If the Problem Is Really Painful Enough?

While the data you've collected to this point is qualitative, there are ways of quantifying it to make an informed decision about whether or not to move forward. Ultimately, the One Metric That Matters here is pain—specifically, your interviewees' pain as it pertains to the problems you've shared with them. So how can you measure pain?

A simple approach is to score your problem interviews. This is not perfectly scientific; your scoring will be somewhat arbitrary, but if you have someone assisting you during the interviews and taking good notes, it should be possible to score things consistently and get value out of this exercise. There are a few criteria you can score against, based on the questions you've asked in a convergent problem interview. Each answer has a weight; by adding the results up, you'll have a sense of where you stand. After completing each interview, ask yourself the following questions.

Even in a convergent problem interview where you’ve focused on a specific set of problems, the interview is open-ended enough to allow interviewees to discuss other issues. That’s completely fine, and is extremely important. There’s nothing that says the problems you’ve presented are the right ones—that’s precisely what you’re trying to measure and justify. So stay open-minded throughout the process.

For the purposes of scoring the interview and measuring pain, a bad score means the interview is a failure—the interviewee’s pain with the problems you’re considering isn’t substantial enough if she spends all her time talking about other problems she has. A failed interview is OK; it may lead you to something even more interesting and save you a lot of heartache.

The more effort the interviewee has put into trying to solve the problems you’re discussing, the better.

Ideally, your interviewees were completely engaged in the process: listening, talking (being animated is a good thing), leaning forward, and so on. After enough interviews you'll know the difference between someone who's focused and engaged, and someone who isn't. The point totals for this question are lower than for the previous two. For one thing, engagement in an interview is harder to measure; it's more subjective than the other questions. We also don't want to weigh engagement in the interview as heavily—it's just not as important. Someone may seem somewhat disengaged but have spent the last five years trying to solve the problems you're discussing. That's someone with a lot of pain . . . maybe he's just easily distracted.

The goal of the problem interview is to discover a problem painful enough that you know people want it solved. Ideally, the people you're speaking to are begging you for the solution. The next step in the process is the solution interview, so if you get there with people, that's a good sign.

At the end of every interview, you should be asking for referrals to other interviewees. There’s a good chance the people your subjects recommend are similar in demographics and share the same problems. Perhaps more importantly at this stage, you want to see if the subjects are willing to help out further by referring people in their network. This is a clear indicator that they don’t feel sheepish about introducing you, and that they think you’ll make them look smarter. If they found you annoying, they likely won’t suggest others you might speak with.

Although having someone offer you money is more likely during the solution interviews (when you’re actually walking through the solution with people), this is still a good “gut check” moment. And certainly it’s a bonus if people are reaching for their wallets.
Calculating the Scores

A score of 31 or higher is a good score. Anything under is not. Try scoring all the interviews, and see how many have a good score. This is a decent indication of whether or not you're onto something with the problems you want to solve. Then ask yourself what makes the good-score interviews different from the bad-score ones. Maybe you've identified a market segment, maybe you get better results when you dress well, maybe you shouldn't do interviews in a coffee shop. Everything is an experiment you can learn from.

You can also sum up the rankings for the problems that you presented. If you presented three problems, which one had the most first-place rankings? That's where you'll want to dig in further and start proposing solutions (during solution interviews). The best-case scenario is very high interview scores within a subsection of interviewees where those interviewees all had the same (or very similar) rankings of the problems. That should give you more confidence that you've found the right problem and the right market.
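The arithmetic can be sketched in a few lines. The criteria names and weights below are illustrative placeholders (the actual score sheet assigns its own point values to each question); only the pass threshold of 31 comes from the text.

```python
# Hypothetical weighted scoring for convergent problem interviews.
# Criteria and weights are made up for illustration; the threshold
# of 31 is the book's cutoff for a "good" interview.
GOOD_SCORE = 31

def score_interview(answers, weights):
    """Sum weighted answers (1.0 = fully yes, 0.0 = no) into a pain score."""
    return sum(weights[criterion] * value for criterion, value in answers.items())

weights = {
    "stayed_on_topic": 10,        # didn't wander off to unrelated problems
    "tried_to_solve_already": 10, # effort spent on the problem so far
    "engaged_in_interview": 5,    # weighted lower: subjective signal
    "asked_for_solution": 5,
    "referred_others": 5,
    "offered_money": 5,
}

# One interviewee's sheet, filled in after the interview.
answers = {
    "stayed_on_topic": 1.0,
    "tried_to_solve_already": 1.0,
    "engaged_in_interview": 0.5,
    "asked_for_solution": 1.0,
    "referred_others": 1.0,
    "offered_money": 0.0,
}
score = score_interview(answers, weights)
print(score, "pass" if score >= GOOD_SCORE else "fail")  # 32.5 pass
```

Run the same sheet over every interview; the share of interviews clearing the threshold, and what the passing ones have in common, is what you act on.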

Case Study | Cloud9 IDE Interviews Existing Customers

Cloud9 IDE is a cloud-based integrated development environment (IDE) that enables web and mobile developers to work together and collaborate in remote teams anywhere, anytime. The platform is primarily for JavaScript and Node.js applications, but it's expanding to support other languages as well. The company has raised Series A financing from Accel and Atlassian.

Although the Cloud9 IDE team is well past the initial problem interview stage, they regularly speak with customers and engage in systematic customer development. Product Manager Ivar Pruijn says, "We're close to product/market fit, and it helps us a great deal to speak with customers, understanding if we're meeting their needs and how they're using our product."

Ivar took the scoring framework outlined previously and modified some of the questions for the types of interviews he was doing. "Since we're now speaking with customers using our product, we asked slightly different questions, but we scored them just the same," he says. The first two questions that Ivar asked himself after conducting an interview were:

1. Did the interviewee mention problems in his/her workflow that our product solves or will solve soon?
2. Is the interviewee actively trying to solve the problems our product solves/will solve soon, or has he/she done so in the past?

"With these questions, we're trying to determine how well we're solving problems for actual customers. If many of the scores had been low, we would have known something was wrong," he says. Happily, most of the interview scores were good, but Ivar was able to dig deeper and learn more. "I was able to identify the customer types to focus on for product improvements.
I noticed that two specific customer segments scored the highest on the interviews, especially on the first two scoring criteria about meeting their needs and solving their problems."

After scoring the initial interviews, Ivar verified the results and the scoring in two ways. First, he interviewed some of the company's top active users, gaining an in-depth knowledge of how they work. Second, he analyzed the data warehouse, which has information on how the product is being used. Both of these confirmed his initial findings: two specific segments of customers were getting significantly more value from the product. "Interestingly, both of these customer groups weren't the initial ones we were going after," he says. "So now we know where we can invest more of our time and energy."

In this case, open-ended discussions followed by scoring—even when the company was beyond the initial Empathy stage—revealed a market segment that had better stickiness and was ripe for rapid growth. What's more, Ivar says that scoring the interview questions helped him improve his interviewing over time, focusing on results that could be acted upon.
Summary

• Cloud9 IDE decided to run scored customer interviews even though the company was well past the Empathy stage.
• The interviews showed that customers were happy, but also revealed two specific customer segments that derived higher value from the product.
• Using this insight, the company compared analytics data and verified that these groups were indeed using the product differently, which is now driving the prioritization of features and marketing.
Analytics Lessons Learned

You can talk to customers and score interviews at any stage of your startup. Those interviews don’t just give you feedback; they also help you identify segments of the market with unique problems or needs that you might target.
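As a minimal sketch of how scored interviews can be tallied by segment, here is one way to do it in code. The segment labels, criteria, and 1–5 scores below are all invented for illustration; they are not Cloud9’s actual data or framework:

```python
# Hedged sketch: tallying scored customer interviews by segment.
# Segments, criteria, and scores are invented for illustration.
from collections import defaultdict

interviews = [
    {"segment": "remote-team lead", "mentions_problem": 5, "actively_solving": 4},
    {"segment": "remote-team lead", "mentions_problem": 4, "actively_solving": 5},
    {"segment": "solo hobbyist",    "mentions_problem": 2, "actively_solving": 1},
    {"segment": "solo hobbyist",    "mentions_problem": 3, "actively_solving": 2},
]

# Combine the two scoring criteria into one score per interview,
# grouped by the segment the interviewee belongs to.
totals = defaultdict(list)
for i in interviews:
    totals[i["segment"]].append(i["mentions_problem"] + i["actively_solving"])

for segment, scores in totals.items():
    print(f"{segment}: avg combined score {sum(scores) / len(scores):.1f}")
```

Even with a handful of interviews, a tally like this makes it obvious when one segment consistently outscores another, which is exactly the signal Ivar used to refocus the product.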
How Are People Solving the Problem Now?

One of the telltale signs that a problem is worth solving is when a lot of people are already trying to solve it or have tried to do so in the past. People will go to amazing lengths to solve really painful problems that matter to them. Typically, they’re using another product that wasn’t meant to solve their problem, but it’s “good enough,” or they’ve built something themselves. Even though you’re doing qualitative interviews, you can still crunch some numbers afterward:

• How many people aren’t trying to solve the problem at all? If people haven’t really made an attempt to solve the problem, you have to be very cautious about moving forward. You’ll have to make them aware of the problem in the first place.
• How many volunteer a solution that’s “good enough”? You’ll spend more time on solutions when you do solution interviews, but startups regularly underestimate the power of “good enough.” Mismatched socks are a universal problem nobody’s getting rich fixing.

Too often, idealistic startups underestimate a market’s inertia. They attack market leaders with features, functionality, and strategies that aren’t meaningful enough to customers. Their MVP has too much “minimum” to provoke a change. They assume that what they’re doing—whether it’s a slicker UI, simpler system, social functionality, or something else—is an obvious win. Then “good enough” bites them in the ass.

The bar for startups to succeed at any real scale is much higher than that of the market leaders. The market leaders are already there, and even if they’re losing ground, it’s generally at a slow pace. Startups need to scale as quickly as possible. You have to be 10 times better than the market leader before anyone will really notice, which means you have to be 100 times more creative, strategic, sneaky, and aggressive. Market leaders may be losing touch with their customers, but they still know them better than anyone else. You need to work much harder to win customers from incumbents. Don’t just look at the “obvious” flaws of the incumbents (like an outdated design) and assume that’s what needs fixing. You’ll have to dig far deeper in order to find the real customer pain points and make sure you address them quickly and successfully.
Are There Enough People Who Care About This Problem? (or, Understanding the Market)

If you find a problem that’s painful enough for people, the next step is to understand the market size and potential. Remember, one customer isn’t a market, and you have to be careful about solving a problem that too few people genuinely care about.

If you’re trying to estimate the size of a market, it’s a good idea to do both a top-down and a bottom-up analysis, and compare the results. This helps to check your math. A top-down analysis starts with a big number and breaks it into smaller parts. A bottom-up one does the reverse. Consider, for example, a restaurant in New York City.

• A top-down model would look at the total money people spend dining out in the US, then the percentage of that in New York, then the number of restaurants in the city, and finally calculate the revenues for a single restaurant.
• A bottom-up model would look at the number of tables in a restaurant, the percentage that are occupied, and the average price per party. Then it would multiply this by days of the year (adjusting for seasonality).

This is an oversimplification—there are plenty of other factors to consider, such as location, type of restaurant, and so on. But the end result should provide two estimates of annual revenue. If they’re wildly different, something is wrong with your business model.

As you’re conducting problem interviews, remember to ask enough demographic-type questions to understand who the interviewees are. The questions you’ll ask will depend a great deal on who you’re speaking to and the type of business you’re starting. If you’re going after a business market, you’ll want to know more about a person’s position in the company, buying power, budgeting, seasonal influences, and industry. If you’re going after a consumer, you’re much more interested in lifestyle, interests, social circles, and so on.
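A back-of-the-envelope version of this comparison can be sketched in code. Every figure below is an invented assumption for illustration, not real market data:

```python
# Hedged sketch: comparing top-down and bottom-up estimates for one
# hypothetical NYC restaurant. All numbers are invented assumptions.

# Top-down: start with a big number and slice it down.
us_dining_spend = 600e9        # assumed total US dining-out spend per year
nyc_share = 0.04               # assumed share of that spent in New York City
nyc_restaurants = 24_000       # assumed number of NYC restaurants
top_down = us_dining_spend * nyc_share / nyc_restaurants

# Bottom-up: start with one restaurant and multiply up.
tables = 20                    # assumed number of tables
occupancy = 0.6                # assumed average occupancy
turns_per_day = 2              # assumed seatings per table per day
avg_check_per_party = 80       # assumed average check per party, in dollars
days_open = 350                # open most of the year (seasonality folded in)
bottom_up = tables * occupancy * turns_per_day * avg_check_per_party * days_open

print(f"Top-down estimate:  ${top_down:,.0f}/year")
print(f"Bottom-up estimate: ${bottom_up:,.0f}/year")

# If the two estimates differ by an order of magnitude, revisit the model.
ratio = max(top_down, bottom_up) / min(top_down, bottom_up)
print(f"Ratio between estimates: {ratio:.1f}x")
```

Here the two estimates land within a factor of two of each other, which is a reassuring sanity check; a 10x gap would mean one of the assumptions is badly wrong.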
What Will It Take to Make Them Aware of the Problem?

If the subjects don’t know they have the problem—but you have good evidence that the need really exists—then you need to understand how easily they’ll come to realize it, and the vectors of awareness.

Be careful. Most of the time, when people don’t have a problem, they’ll still agree with you. They don’t want to hurt your feelings. To be nice, they’ll pretend they have the problem once you alert them to it. If you’re convinced that people have the problem—and just need to be made aware of it—you need to find ways to test that assumption. Some ways to get a more honest answer from people are:

• Get them a prototype early on.
• Use paper prototyping, or a really simple mockup in PowerPoint, Keynote, or Balsamiq, to watch how they interact with your idea without coaching.
• See if they’ll pay immediately.
• Watch them explain it to their friends and see if they understand how to spread the message.
• Ask for referrals to others who might care.

A “Day in the Life” of Your Customer

During problem interviews, you want to get a deep understanding of your customer. We mentioned collecting demographics earlier and looking for ways to bucket customers into different groups, but you can take this a step further and gain a lot more insight. You can get inside their heads.

Customers are people. They lead lives. They have kids, they eat too much, they don’t sleep well, they phone in sick, they get bored, they watch too much reality TV. If you’re building for some kind of idealized, economically rational buyer, you’ll fail. But if you know your customers, warts and all, and you build things that naturally fit into their lives, they’ll love you.

To do this, you need to infiltrate your customer’s daily life. Don’t think of “infiltrate” as a bad word. In order for you to succeed, customers need to use your application; if you want them to do so, you need to slot yourself into their lives in an effortless, seamless way. Understanding customers’ daily lives means you can map out everything they do, and when they do it. With the right approach, you’ll start to understand why as well. You’ll identify influences (bosses, friends, family members, employees, etc.), limitations, constraints, and opportunities.

One tactic for mapping this out is a “day in the life” storyboard. A storyboard is visual—it’s going to involve lots of multicolored sticky notes plastered on the wall—and it allows you to navigate through a customer’s life and figure out where your solution will have the most impact. Figure 15-1 shows an example of a storyboard. Having this map in place makes it much easier to come up with good hypotheses around how, when, and by whom your solution will be used. You can experiment with different tactics for interrupting users and infiltrating their lives. The right level of positive access will allow them to use your product successfully.
Mapping a day in the life of your customer will also reveal obvious holes in your understanding of your customer, and those are areas of risk you may want to tackle quickly. With a clearer understanding of when and how your solution will be used, you have a better chance of defining a minimum viable product feature set that hits the mark. The “day in the life” exercise is a way of describing a very detailed, human use case for your solution that goes beyond simply defining target markets and customer segments. After all, you’ll be selling to people. You need to know how to reach them, interrupt them, and make a difference in their lives at the exact moment when they need your solution.

User experience designers also rely on mental models of their users to understand how people think about something. A mental model is simply the mental representation of something in the real world—often a simplified version of reality that helps someone work with a thing. Sometimes these are metaphors—the recycle bin on a computer, for example. Other times, they’re simple, fundamental patterns that live deep down in our reptile brains—team allegiance, or xenophobia. Adaptive Path co-founder Indi Young has written extensively about mental models, developing a number of ways to link your customers’ lives and patterns with the products, services, and interactions you have with them.* Figure 15-2 shows an example of Indi’s work, listing a customer’s morning behaviors alongside various product categories.†

Outlining your customers’ behaviors as they go about a particular task, then aligning your activities and features with those behaviors, is a good way to identify missed opportunities to improve engagement, upsell, endorse, or otherwise influence your buyers. If you’re making a personal fitness tool, timing interactions with gym visits, holiday binges, and morning ablutions lets you create a more tailored, engaging experience.
Pattern | Finding People to Talk To

The modern world isn’t inclined to physical interaction. We have dozens of ways to engage people at a distance, and when you’re trying to find a need, they’re mostly bad. Unless you’re face-to-face with prospects, you won’t see the flinches, the subtle body language, and the little gasps and shrugs that mean the difference between a real problem and a waste of everyone’s time.

That doesn’t mean technology is bad. We have a set of tools for finding prospects that would have seemed like superpowers to our predecessors. Before you get the hell out of the office, you need to find people to talk with. If you can find these people efficiently, that bodes well: it means that, if they’re receptive to your idea, you can find more like them and build your customer base. Here are some dumb, obvious, why-didn’t-I-think-of-that ways to find people to talk to, mail, and learn from.

Twitter’s Advanced Search

For startups, Twitter is a goldmine. Its asymmetric nature—I can follow you, but you don’t have to follow me back—and relatively unwalled garden mean people expect interactions. And we’re vain; if you mention someone, he’ll probably come find out what you said and who you are. Provided you don’t abuse this privilege, it’s a great way to find people.

Let’s say you’re building a product for lawyers and want to talk to people nearby. Put keywords and location information into Twitter’s Advanced Search, as shown in Figure 15-3.

You’ll get a list of organizations and people who might qualify, similar to the one in Figure 15-4. Now, if you’re careful, you can reach out to them. Don’t spam them; get to know them a bit, see where they live and what they say, and when they mention something relevant—or when you feel comfortable doing so—speak up. Just mention them by name, invite them to take a survey, and so on.

There are other interesting tools for digging into Twitter and finding people. Moz has a tool called Followerwonk, and there’s also the freely available people search engine, Twellow.

LinkedIn

Another huge boon to startups everywhere is LinkedIn. You can access a tremendous amount of demographic data through searches like the one in Figure 15-5. You don’t need to connect to these people on LinkedIn, because you can just find their names and numbers, look up their firms’ phone numbers, and start dialing. But if you do have a friend in common, you’ll find that a warm intro works wonders.

LinkedIn also has groups, which you can search through and join. Most of these groups are aligned around particular interests, so you can find relevant people and also do some background research.

Facebook

Facebook is a bit riskier to mine, since it’s a reciprocal relationship (people have to friend you back). But you’ll get a sense of the size of a market from your search results alone, as seen in Figure 15-6, and you might find useful groups to join and invite to take a test or meet for a focus-group discussion.

Some of these approaches seem blindingly obvious. But a little preparation before you get out of the office—physically or virtually— can make all the difference, giving you better data sooner and validating or repudiating business assumptions in days instead of weeks.
Getting Answers at Scale

You should continue doing customer interviews (after the first 10–20 or so) and iterate on the questions you ask, dig deeper with people, and learn as much as you can. But you can also expand the scope of your efforts and move into doing some quantitative analysis. It’s time to talk to people at scale. This does several things:

• It forces you to formalize your discussions, moving from subjective to objective.
• It tests whether you can command the attention—at scale—that you’ll need to thrive.
• It gives you quantitative information you can analyze and segment, which can reveal patterns you won’t get from individual groups.
• The respondents may become your beta users and the base of your community.

To talk with people at scale you can employ a number of tactics, including surveys and landing pages. These give you the opportunity to reach a wider audience and build a stronger, data-driven case for the qualitative feedback you received during interviews.

Case Study | LikeBright “Mechanical Turks” Its Way into TechStars

LikeBright is an early-stage startup in the dating space that joined the TechStars Seattle accelerator program in 2011. But it wasn’t an easy road. Founder Nick Soman says that at first Andy Sack, managing director of the Seattle program, rejected LikeBright, saying, “We don’t think you understand your customer well enough.” With the application deadline looming, Andy gave Nick a challenge: go speak to 100 single women about their frustrations with dating, and then tell TechStars what he’d learned.

Nick was stuck. How was he going to speak with that many women quickly enough? He didn’t think it was possible, at least not easily. And then he decided to run an experiment with Mechanical Turk.* Mechanical Turk is a service provided by Amazon that allows you to pay small amounts of money for people to complete simple tasks. It’s often used to get quick feedback on things like logos and color choices, or to perform small tasks such as tagging a picture or flagging spam.

The idea was to use Mechanical Turk to survey 100 single women, putting out a task (or what Mechanical Turk calls a HIT) asking women (who fit a particular profile) to call Nick. In exchange he paid them $2. The interviews typically lasted 10–15 minutes. “In my research, I found that there’s a good cross-section of people on Mechanical Turk,” says Nick. “We found lots of highly educated, diverse women that were very willing to speak with us about their dating experiences.”

Nick set up several Google Voice phone numbers (throwaway numbers that couldn’t be tracked or reused) and recruited a few friends to help him out. He prepared a simple interview script with open-ended questions, since he was digging into the problem validation stage of his startup. Nick says, “I was amazed at the feedback I got. We were able to speak with 100 single women that met our criteria in four hours on one evening.”

As a result, Nick gained incredible insight into LikeBright’s potential customers and the challenges he would face building the startup. He went back to TechStars and Andy Sack with that know-how and impressed them enough to get accepted. LikeBright’s website is now live with a 50% female user base, and the company recently raised a round of funding. Nick remains a fan of Mechanical Turk. “Since that first foray into interviewing customers, I’ve probably spoken with over 1,000 people through Mechanical Turk,” he says.
Summary

• LikeBright used a technical solution to talk to many end users in a short amount of time.
• After talking to 100 prospects in 24 hours, the founders were accepted to a startup accelerator.
• The combination of Google Voice and Mechanical Turk proved so successful that LikeBright continues to use it regularly.
Analytics Lessons Learned While there’s no substitute for qualitative data, you can use technology to dramatically improve the efficiency of collecting that data. In the Empathy stage, focus on building tools for getting good feedback quickly from many people. Just because customer development isn’t code doesn’t mean you shouldn’t invest heavily in it.

LikeBright chose Mechanical Turk to reach people at scale, but there are plenty of other tools. Surveys can be effective, assuming you’ve done enough customer development already to know what questions to ask. The challenge with surveys is finding people to answer them. Unlike the one-to-one interviews you’ve been conducting so far, here you need to automate the task and deal with the inevitable statistical noise.

If you have a social following or access to a mailing list, you can start there, but often, you’re trying to find new people to speak with. They’re new sources of information, and they’re less likely to be biased. That means reaching out to groups with whom you aren’t already in touch, ideally through software, so you’re not curating each invitation by hand.

Facebook has an advertising platform for reaching very targeted groups of people. You can segment your audience by demographics, interests, and more. Although the click-through rate on Facebook ads is extremely low, you’re not necessarily looking for volume at this stage. Finding 20 or 30 people to speak with is a great start. Plus, you can test messaging this way, through the ads you publish as well as the subsequent landing pages you use to encourage people to connect with you.

You can also advertise on LinkedIn to very targeted audiences. This will cost you some money, but if you’ve identified a good audience of people through searching LinkedIn contacts and groups, you might consider testing some early messaging through its ad platform.

Google makes it really easy to target campaigns. If you want to promote a survey or signup on the Web, you can do so with remarkable precision. In the first step of setting up an AdWords campaign, you get to specify the location, language, and other information that targets the ad, as shown in Figure 15-7.
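As a rough sketch of the arithmetic involved in planning such a campaign, assuming made-up click-through and signup rates:

```python
# Hedged sketch: back-of-envelope reach needed to book 25 interviews
# from ads. The CTR and signup rate are invented assumptions, not
# benchmarks for any particular ad platform.
target_interviews = 25
ctr = 0.001            # assumed 0.1% click-through rate on the ads
signup_rate = 0.05     # assumed 5% of clickers agree to talk

clicks_needed = target_interviews / signup_rate
impressions_needed = clicks_needed / ctr

print(f"{clicks_needed:.0f} clicks -> {impressions_needed:,.0f} impressions")
```

Even with generous assumptions, a small interview goal can require hundreds of thousands of impressions, which is why precise targeting matters more than raw volume at this stage.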

Once you’ve done that, you can create your message, using a screen like the one in Figure 15-8. This is an excellent way to try out different taglines and approaches: even the ones that don’t get clicks show you something, because you know what not to say. Try different appeals to basic emotions: fear, greed, love, wealth, and so on. Learn what gets people clicking and what keeps them around long enough to fill out a survey or submit an email.

Google also has a survey offering, called Google Consumer Surveys, that’s specifically designed to collect consumer information.* Because of the wide reach of Google’s publishing and advertising network, the company can generate results that are statistically representative of segments of the population as a whole. Google’s technique uses a “survey wall” approach, but by simplifying the survey process to individual questions requiring only a click or two, the company achieves a 23.1% response rate (compared to less than 1% for “intercept” surveys, 7–14% for phone surveys, and 15% for Internet panels).† However, because of the quick-response format, it’s hard to collect multiple responses and correlate them, which limits the kinds of analysis and segmentation you can do.

Pattern | Creating an Answers-at-Scale Campaign

An effective survey involves several critical steps: survey design, testing, distribution, and analysis. But before you do any of these, know why you’re asking the questions in the first place. Lean is all about identifying and quantifying the risk. What kind of uncertainty are you trying to quantify by doing a survey?

• If you’re asking what existing brands come to mind in a particular industry, will you use this information to market alongside them? Address competitive threats? Choose partners?
• If you’re asking how customers try to find a product or service, will this inform your marketing campaigns and choice of media?
• If you’re asking how much money people spend on a problem you’re planning to address, how will this shape your pricing strategy?
• If you’re testing which tagline or unique value proposition resonates best with customers, will you choose the winning one, or just take that as advice?

Don’t just ask questions. Know how the answers to the questions will change your behavior. In other words, draw a line in the sand before you run the survey. Your earlier problem interviews showed you an opportunity; now, you’re checking to see whether that opportunity exists in the market as a whole. For each quantifiable question, decide what would be a “good” score. Write it down somewhere so you’ll remember.
Survey Design

Your survey should include three kinds of questions:

• Demographics and psychographics you can use to segment the responses, such as age, gender, or Internet usage.
• Quantifiable questions that you can analyze statistically, such as ratings, agreement or disagreement with a statement, or selecting something from a list.
• Open-ended questions that allow respondents to add qualitative data.

Always ask the segmentation questions up front and the open-ended ones at the end. That way you know if your sample was representative of the market you’re targeting, and if people don’t finish the last questions, you still have enough quantitative responses to generate results in which you can be confident.

Test the Survey

Before sending it out, try the survey on people who haven’t seen it. You’ll almost always find they get stuck or don’t understand something. You’re not ready to send the survey out until at least three people who haven’t seen it, and are in your target market, can complete it without questions and then explain to you what each question meant. This is no exaggeration: everyone gets surveys wrong.
Send the Survey Out

You want to reach people you don’t know. You could tweet out a link to the survey form or landing page, but you’ll naturally get respondents who are in your extended social circle. This is a time when it makes sense to pay for access to a new audience. Design several ads that link to the survey. They can take several forms:

• Name the audience you’re targeting. (“Are you a single mom? Take this brief survey and help us address a big challenge.”)
• Mention the problem you’re dealing with. (“Can’t sleep? We’re trying to fix that, and want your input.”)
• Mention the solution or your unique value proposition, without a sales pitch. (“Our accounting software automatically finds tax breaks. Help us plan the product roadmap.”) Be careful not to lead the witness; don’t use this if you’re still trying to settle on positioning.

Remember, too, that the first question you’re asking is “Was my message compelling enough to convince them to take the survey?” You’re trying out a number of different value propositions. In some cases, you don’t even care about a survey—we know one entrepreneur who tried out various taglines, all of which pointed to a spam site. All he needed to know was which one got the most clicks, and he didn’t want to tell anyone who he was yet.

You can also use mailing lists. Some user groups or newsletters may be willing to feature you on their page or in a mail-out if what you’re doing is relevant to their audience.
Collect the Information

When you run the survey, measure your cost per completed response. Do a small test of a few dozen responses first. If your numbers are low, check whether people are abandoning on a particular form field—some analytics tools like ClickTale let you do this. Then remove that field and see if completion rates go up. You can also try breaking up the survey into smaller ones, asking fewer questions, or changing your call to action.

While you’re collecting information, don’t forget to also request permission to contact respondents and collect contact information. If you’ve found a workable solution to a real problem, some of them may become your beta customers.
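A minimal sketch of the cost-per-completed-response calculation, comparing two ad variants with invented spend and completion numbers:

```python
# Hedged sketch: cost per completed survey response for two ad variants.
# The variant names, spend, and completion counts are all invented.
variants = {
    "name-the-audience": {"spend": 50.00, "completions": 40},
    "name-the-problem":  {"spend": 50.00, "completions": 25},
}

costs = {}
for name, v in variants.items():
    costs[name] = v["spend"] / v["completions"]
    print(f"{name}: ${costs[name]:.2f} per completed response")
```

Tracking this one ratio per variant tells you quickly which message is worth scaling up and which one is burning budget.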
Analyze the Data

Finally, crunch the data properly. You’re actually looking at three things:

• First, were you able to capture the attention of the market? Did people click on your ads and links? Which ones worked best?
• Second, are you on the right track? What decisions can you now make with the data you’ve collected?
• Third, will people try out your solution or product? How many of your respondents were willing to be contacted? How many agreed to join a forum or a beta? How many asked for access in their open-ended responses?

Statistics are important here. Don’t skimp on the math—make sure you learn everything you can from your efforts.

• Calculate the mean, median, and standard deviation of the quantifiable questions. Which slogan won? Which competitor is most common? Was there a clear winner, or was the difference marginal?
• Analyze each quantifiable question by each segment to see if a particular group of your respondents answered differently. Use pivot tables for this kind of analysis (see the upcoming sidebar “What’s a Pivot Table?” for details); you’ll quickly see if a particular response correlated with a particular group. This will help you focus your efforts or see where one set of answers is skewing the rest.
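Using Python's standard statistics module, the per-question math might look like this. The ratings below are invented responses to a hypothetical "rate this tagline 1-5" question:

```python
# Hedged sketch: basic stats on one quantifiable survey question.
# The ratings are invented for illustration.
import statistics

ratings_a = [4, 5, 3, 4, 5, 4, 2, 5]  # tagline A, rated 1-5
ratings_b = [3, 2, 4, 3, 3, 2, 3, 4]  # tagline B, rated 1-5

for name, ratings in [("A", ratings_a), ("B", ratings_b)]:
    print(f"Tagline {name}: "
          f"mean={statistics.mean(ratings):.2f} "
          f"median={statistics.median(ratings)} "
          f"stdev={statistics.stdev(ratings):.2f}")
```

The standard deviation matters as much as the mean: a tagline with a high average but a wide spread polarizes respondents, while a tight spread suggests broad agreement.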

What’s a Pivot Table?
Most of us have used a spreadsheet. But if you want to take your analytical skills to the next level, you need to move up to pivot tables. This feature lets you quickly analyze many rows of data as if it were a database, without, well, the database.

Imagine that you have 1,000 responses to a survey. Each response is a row in a spreadsheet, containing a number of fields of data. The first column has time and date, the next has email, and the rest have the individual responses that particular respondents gave. Imagine, for example, that your survey asked respondents their gender, the number of hours per week that they play video games, and their age, as shown in the following table.

You can simply tally up the columns and see what the average responses were—that people play 8.85 hours a week (as shown in the preceding figure). But that’s only a basic analysis, and a misleading one. More often, you want to compare responses against one another—for example, do men play more video games than women? That’s what a pivot table is for. First, you tell the pivot table where to get the source data, then you specify the dimension by which to segment, and then you set what kind of computation you want (such as the average, the maximum value, or the standard deviation), as shown here:

The real power of pivot tables, however, comes when you analyze two segments against each other. For example, if we have categories for gender and age, we can gain even more insight, as shown here:
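The same two-segment analysis can be sketched in code, assuming the pandas library is available. The survey rows below are invented, but they mirror the gender-and-hours example above:

```python
# Hedged sketch: a pivot table over invented survey rows, mirroring the
# gender x age x video-game-hours example. Assumes pandas is installed.
import pandas as pd

responses = pd.DataFrame({
    "gender": ["M", "F", "M", "F", "M", "F"],
    "age":    ["18-24", "18-24", "25-34", "25-34", "18-24", "25-34"],
    "hours":  [12, 6, 10, 4, 14, 8],
})

# A simple column average hides the segment differences...
print("Overall average hours:", responses["hours"].mean())

# ...while a pivot table segments the same rows by gender and age.
pivot = pd.pivot_table(responses, values="hours",
                       index="gender", columns="age", aggfunc="mean")
print(pivot)
```

The overall average (9 hours here) is real but misleading; the pivot shows it is a blend of very different segment behaviors, which is exactly the insight the sidebar describes.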

Build It Before You Build It (or, How to Validate the Solution)

With a validated problem in hand, it’s time to validate the solution. Once again, this starts with interviewing customers (what Lean Startup describes as solution interviews) to get the qualitative feedback and confidence necessary to build a minimum viable product. You can also continue and expand on quantitative testing through surveys and landing pages. This provides you with a great opportunity to start testing your messaging (unique value proposition from Lean Canvas) and the initial feature set.

There are other practical ways of testing your solution prior to actually building it. By this point, you should have identified the riskiest aspects of the solution and what you need people to do with the solution (if it existed) in order to be successful. Now look for a way of testing your hypotheses through a proxy. Map the behavior you want people to do onto a similar platform or product, and experiment. Hack an adjacent system.
Case Study | Localmind Hacks Twitter

Localmind is a real-time question-and-answer platform tied to locations. Whenever you have a question that’s relevant to a location—whether that’s a specific place or an area—you can use Localmind to get an answer. You send the question out through the mobile application, and people answer.

Before writing a line of code, Localmind was concerned that people would never answer questions. The company felt this was a huge risk; if questions went unanswered, users would have a terrible experience and stop using Localmind. But how could it prove (or disprove) that people would answer questions from strangers without building the app?

The team looked to Twitter and ran an experiment. Tracking geolocated tweets (primarily in Times Square, because there were lots of them there over several days), they sent @ messages to people who had just tweeted. The messages would be questions about the area: how busy is it, is the subway running on time, is something open, etc. These were the types of questions they believed people would ask through Localmind.

The response rate to their tweeted questions was very high. This gave the team the confidence to assume that people would answer questions about where they were, even if they didn’t know who was asking. Even though Twitter wasn’t the “perfect system” for this kind of test because there were lots of variables (e.g., the team didn’t know if people would get a push notification on a tweet to them or notice the tweet), it was a good enough proxy to de-risk the solution, and convince the team that it was worth building Localmind.
Summary

• Localmind identified a big risk in its plan—whether people would answer questions from strangers—and decided to quantify it.
• Rather than writing code, the team used tweets with location information.
• The results were quick and easy, and sufficient for the team to move forward with an MVP.
Analytics Lessons Learned Your job isn’t to build a product; it’s to de-risk a business model. Sometimes the only way to do this is to build something, but always be on the lookout for measurable ways to quantify risk without a lot of effort.
Before You Launch the MVP

As you’re building your bare-minimum product—just enough functionality to test the risks you’ve identified in the Empathy stage—you’ll continue to gather feedback (in the form of surveys) and acquire early adopters (through a beta enrollment site, social media, and other forms of teasing). In this way, by the time you launch the MVP you’ll have a critical mass of testers and early adopters eager to give you feedback. You’re farming test subjects.

Your OMTM at this point is enrollments, social reach, and other indicators that you’ll be able to drive actual users to your MVP so you can learn and iterate quickly. This is the reverse Field of Dreams moment: if they come, you will build it.

It’s hard to decide how good your minimum product should be. On the one hand, time is precious, and you need to cut things ruthlessly. On the other hand, you want users to have an “aha!” moment, that sense of having discovered something important and memorable. You need to keep the magic.
Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic. Arthur C. Clarke, Profiles of the Future, 1961

Gehm’s Corollary: Any technology distinguishable from magic is insufficiently advanced. Barry Gehm, ANALOG, 1991
Deciding What Goes into the MVP

Take all of your solution interviews, quantitative analysis, and “hacks,” and decide what feature set to launch for your MVP. The MVP has to generate the value you’ve promised to users and customers. If it’s too shallow, people will be uninterested and disappointed. If it’s too bloated, people will be confused and frustrated. In both cases, you’ll fail.

It’s important to contrast an MVP with a smoke-test approach where you build a teaser site—for example, a simple page generated in LaunchRock with links to social networks. With a smoke-test page, you’re testing the risk that the message isn’t compelling enough to get signups. With the MVP, you’re testing the risk that the product won’t solve a need that people want solved in a way that will make them permanently change their behavior. The former tests the problem messaging; the latter, the solution effectiveness.

Circle back with interviewees as you’re designing the MVP. Show them wireframes, prototypes, and mockups. Make sure you get the strong, positive reaction you’re looking for before building anything. Cut out everything that doesn’t draw an extremely straight line from your validated problem, to the unique value proposition, to the MVP, to the metrics you’ll use to validate success.

It’s important to note that the MVP is a process, not a product. This is something we learned at Year One Labs working with multiple startups all at a similar stage. The knee-jerk reaction once you’ve decided on the feature set is to build it and gun for traction as quickly as possible, turning on all the marketing tactics possible. As much as we all understand that seeing our name in lights on a popular tech blog doesn’t really make a huge difference, it’s still great when it’s there. But sticking with Lean Startup’s core tenet—build→measure→learn—it’s important to realize that an MVP will go through numerous iterations before you’re ready to go to the next step.

Measuring the MVP

The real analytical work starts the minute you develop and launch an MVP, because every interaction between a customer and the MVP results in data you can analyze.

For starters, you need to pick the OMTM. If you don’t know what it should be, and you haven’t defined “success” for that metric, you shouldn’t be building anything. Everything you build into the initial MVP should relate to and impact the OMTM, and the line in the sand has to be clearly drawn.

At this stage, metrics around user acquisition are irrelevant. You don’t need hundreds of thousands of users to prove whether something is working. You don’t even need thousands. Even with the most complicated of businesses, you can narrow things down significantly:

• If you’re building a marketplace for used goods, you might focus on one tiny geographic area, such as house listings in Miami.
• The same holds true for any location-based application where density is important—a garage sale finder that’s limited to one or two neighborhoods.
• You might pick one product type for your marketplace test—say, X-Men comics from the 80s—validate the business there, and then expand.
• Maybe you want to test the core game mechanics of your game. Release a mini-game as a standalone application and see what engagement is like.
• Perhaps you’re building a tool for parents to connect. See if it works in a single school.

The key is to identify the riskiest parts of your business and de-risk them through a constant cycle of testing and learning. Metrics are how you measure and learn whether the risk has been overcome.

Entrepreneur, author, and investor Tim Ferriss, in an interview with Kevin Rose, said that if you focus on making 10,000 people really happy, you could reach millions later.* For the first launch of your MVP, you can think even smaller, but Ferriss’s point is absolutely correct: total focus is necessary in order to make genuine progress.

The most important metrics will be around engagement.
Are people using the product? How are they using the product? Are they using all of the product or only pieces of it? Is their usage and behavior as expected or different?
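These engagement questions can be answered only if each feature emits events you can count. As a minimal sketch (the event log, user IDs, and feature names below are hypothetical, not from the book), here is how per-feature usage sub-metrics might roll up into a single engagement number:

```python
from collections import defaultdict
from datetime import date

# Hypothetical event log: (user_id, date, feature) tuples.
events = [
    ("u1", date(2013, 3, 1), "upload"),
    ("u1", date(2013, 3, 2), "share"),
    ("u2", date(2013, 3, 1), "upload"),
    ("u3", date(2013, 3, 2), "upload"),
    ("u3", date(2013, 3, 2), "share"),
]

def feature_usage(events):
    """Distinct users per feature -- the sub-metrics that bubble up."""
    users_by_feature = defaultdict(set)
    for user, _, feature in events:
        users_by_feature[feature].add(user)
    return {f: len(users) for f, users in users_by_feature.items()}

def active_users(events):
    """A simple engagement OMTM: distinct users seen in the log."""
    return len({user for user, _, _ in events})

print(feature_usage(events))  # {'upload': 3, 'share': 2}
print(active_users(events))   # 3
```

The point of the sketch is structural: every feature has its own counter, and all of them feed one top-level number you can draw a line in the sand against.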

No feature should be built without a corresponding metric on usage and engagement. These sub-metrics all bubble up to the OMTM; they’re pieces of data that, aggregated, tell a more complete story. If you can’t instrument a feature or component of your product, be very careful about adding it in—you’re introducing variables that will become harder and harder to control.

Even as you focus on a single metric, you need to be sure you’re actually adding value. Let’s say you launch a new SaaS product, and you assume that if someone doesn’t use it in 30 days, he’s churned. That means it’ll be 30 days before you know your churn rate. That’s much too long. Customers always churn, but if you’re not writing them off quickly, you may think you have more engagement than you really do.

Even if initial engagement is strong, you need to measure whether you’re delivering value. You might, for example, look at the time between visits. Is it the same? Or does it gradually drop off? You might find a useful leading indicator along the way.
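To make the churn-window and time-between-visits ideas concrete, here is a small sketch. The visit dates and the 30-day cutoff are illustrative assumptions, not a prescription:

```python
from datetime import date

# Hypothetical visit log per user.
visits = {
    "u1": [date(2013, 3, 1), date(2013, 3, 4), date(2013, 3, 10)],
    "u2": [date(2013, 3, 1)],
}

def churned(user_visits, as_of, window_days=30):
    """A user counts as churned after `window_days` of inactivity."""
    return (as_of - max(user_visits)).days > window_days

def visit_gaps(user_visits):
    """Days between consecutive visits; a widening gap is a
    leading indicator that engagement is slipping."""
    ds = sorted(user_visits)
    return [(b - a).days for a, b in zip(ds, ds[1:])]

today = date(2013, 4, 5)
print([u for u, v in visits.items() if churned(v, today)])  # ['u2']
print(visit_gaps(visits["u1"]))  # [3, 6]
```

Shrinking `window_days` writes inactive users off faster, which is exactly the trade-off the paragraph above describes: a tighter window risks mislabeling slow-but-loyal users, a looser one inflates apparent engagement.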
Don’t Ignore Qualitative Analytics

You should be speaking with users and customers throughout the MVP process. Now that they have a product in their hands, you can learn a great deal from them. They’ll be less inclined to lie or sugarcoat things—after all, you made a promise of some kind, and now they have a high expectation that you’ll deliver. Early adopters are forgiving, and they’re OK with (and in fact, crave) roughly hewn products, but at the same time their feedback will become more honest and transparent as their time with the MVP increases.
Be Prepared to Kill Features

It’s incredibly hard to do, but it can make a huge difference. If a feature isn’t being used, or it’s not creating value through its use, get rid of it and see what happens.

Once you’ve removed a feature, continue to measure engagement and usage with existing users. Did it make a difference? If nobody minds, you’ve cleaned things up. If the existing users protest, you may need to revisit your decision. And if a new cohort of users—who’d never seen the feature before it was removed—start asking for it, they may represent a new segment with different needs than your existing user base. The narrowing of your focus and value proposition through the elimination of features should have an impact on how customers respond.
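One way to check whether removing a feature made a difference is to compare engagement for existing users before and after the removal, and separately for the new cohort that never saw the feature. A minimal sketch with made-up session counts (the numbers and names are illustrative assumptions):

```python
# Weekly sessions per existing user: (before removal, after removal).
existing = [(4, 4), (3, 2), (5, 5)]

# Users who signed up after the removal and never saw the feature.
new_cohort = [4, 5, 3]

def mean(xs):
    """Average of a non-empty list of numbers."""
    return sum(xs) / len(xs)

before = mean([b for b, _ in existing])
after = mean([a for _, a in existing])

print(f"existing users: {before:.1f} -> {after:.1f} sessions/week")
print(f"new cohort:     {mean(new_cohort):.1f} sessions/week")
```

If `after` holds steady and the new cohort matches or beats existing users, nobody missed the feature; if the new cohort lags or asks for it, that is the signal of a distinct segment described above.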

Case Study | Static Pixels Eliminates a Step in Its Order Process

Static Pixels is an early-stage startup founded by Massimo Farina. The company allows you to order prints of your Instagram photos on recycled cardboard.

When the company first launched, it had a feature called InstaOrder, which allowed you to order photos directly from Instagram. Massimo believed that InstaOrder would make it easier for customers to use his service and increase the volume of orders. “We built the feature based on pre-launch feedback, and the assumption that users would like it,” Massimo said. The company spent two weeks building the feature—a costly amount of development time for a small team—but after releasing the feature found it wasn’t used much. Massimo said, “Turns out, the feature was confusing people and making the checkout process more complicated.”

As Figure 15-9 shows, the first-time ordering process with InstaOrder had an extra step, and that step required going to PayPal to preauthorize payments. The hypothesis was that the feature would be worth the first-time ordering pain, after which ordering would be much easier directly through Instagram. “Convenience was the hypothesis,” noted Massimo. But Massimo and his team were wrong. Not only were orders low, but page views started to drop on the landing page that promoted the feature, and the bounce rate was high as well. It just wasn’t resonating.

Two weeks after the feature was removed, the number of transactions doubled, and it continues to increase. The bounce rate on the new landing page improved, while sign-in goal completions increased.

So what did the Static Pixels team learn? “For starters, I think people didn’t transact through Instagram because it’s a very new and foreign process,” Massimo said. “Ordering products via a native social platform interface hasn’t really been done before. Also, I believe that when people are posting photos to Instagram, they aren’t necessarily thinking about ordering prints of that photo.”

The company lost some development time, but through a focus on analytics—particularly on its key metric of prints ordered—it identified roadblocks in its process, made tough decisions on removing a feature (which it originally thought was one of its unique value propositions), and then tracked the results.
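Friction like Static Pixels’ extra PayPal step shows up clearly in a step-by-step funnel. The counts below are invented for illustration (not Static Pixels’ actual data); the point is the per-step conversion calculation that exposes where users drop off:

```python
# Hypothetical funnel: (step name, number of users reaching it).
with_instaorder = [("landing", 1000), ("paypal_preauth", 240), ("order", 60)]
without_feature = [("landing", 1000), ("checkout", 420), ("order", 130)]

def step_conversion(funnel):
    """Conversion rate at each step relative to the previous step."""
    return [
        (name, count / funnel[i - 1][1])
        for i, (name, count) in enumerate(funnel)
        if i > 0
    ]

for name, rate in step_conversion(with_instaorder):
    print(f"{name}: {rate:.0%}")
# paypal_preauth: 24%
# order: 25%
```

Comparing the two funnels step by step makes the tough call easier: the flow with fewer steps converts more users end to end, which is what the transaction and bounce-rate numbers in the case study reflected.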

Summary

• The way Static Pixels asked users to buy had too much friction.
• A lighter-weight approach, with fewer steps, was easier to implement and increased conversion rates.
Analytics Lessons Learned

Building a more advanced purchasing system that sacrificed first-purchase simplicity for long-term ease of repeat purchases seemed like a good idea, but it was premature. This early in the company’s life, the question was “Will people buy prints?” and not “Will we have loyal buyers?” The feature the team had built was de-risking the wrong question. Always know what risk you’re eliminating, and then design the minimum functionality to measure whether you’ve overcome it.

A Summary of the Empathy Stage

• Your goal is to identify a need you can solve in a way people will pay money for at scale. Analytics is how you measure your way from your initial idea to the realization of that goal.
• Early on, you conduct qualitative, exploratory, open-ended discussions to discover the unknown opportunities.
• Later, your discussions become more quantitative and more convergent, as you try to find the right solution for a problem.
• You can use tools to get answers at scale and build up an audience as you figure out what product to build.

Once you have a good idea of a problem you’re going to solve, and you’re confident that you have real interest from a sizeable market you know how to reach, it’s time to build something that keeps users coming back. It’s time to get sticky.
Exercise | Should You Move to the Next Stage? Answer the following questions.
