Lean Analytics: Chapter 21

Am I Good Enough?

One of the biggest questions we wanted to tackle with Lean Analytics is “what’s normal?” It’s something we get asked all the time: “How do I know what’s a normal or ideal value for the metrics I’m tracking? How do I know if it’s going well or not? Should I keep optimizing this metric, or move on to something else?” At the outset, many people cautioned us against trying to find a typical value for a particular metric. After all, startups are, by definition, trying to break the rules, which means the rules are being rewritten all the time. But we think it’s important to try to define “normal” for two big reasons.

First, you need to know if you’re in the ballpark. If your current behavior is outrageously far from that of everyone else, you should be aware of it. If, on the other hand, you’re already as good as you’re going to get—move on. You’ve already optimized a key metric, and you’ll get diminishing returns trying to improve it further.

Second, you need to know what sport you’re playing. Online metrics are in flux, which makes it hard to find a realistic baseline. Only a few years ago, for example, typical e-commerce conversion rates were in the 1–3% range. The best-in-class online retailers got a 7–15% conversion rate, because they had offline mindshare or had worked hard to become the “default” tool for purchase. These numbers have changed in recent years, though, because people now consider the Web the “default” storefront for many purchases. Today, pizza delivery companies have extremely high conversion rates because, well, that’s how you buy pizza.

In other words: there is a normal or ideal for most metrics, and that normal will change significantly as a particular business model goes from being novel to being mainstream.

Case Study | WP Engine Discovers the 2% Cancellation Rate

WP Engine is a fast-growing hosting company specializing exclusively in hosting WordPress sites.* Successful entrepreneur and popular blogger Jason Cohen founded the company in July 2010. In November 2011, WP Engine raised $1.2M in financing to accelerate growth and handle the ongoing challenges of scaling the business.

WP Engine is a service company. Its customers rely on WP Engine to provide fast, quality hosting with constant uptime. WP Engine is doing a great job, but customers still cancel. All companies have cancellations (or churn), and it’s one of the most critical metrics to track and understand—not only is it essential for calculating metrics like customer lifetime value, but it’s also an early warning signal that something is going wrong or that a competing solution has emerged.

Having a cancellation number isn’t enough; you need to understand why people are abandoning your product or service. Jason did just that by calling customers who cancelled. “Not everyone wanted to speak with me; some people never responded to my calls,” he recalls. “But enough people were willing to talk, even after they had left WP Engine, that I learned a lot about why they were leaving.” According to Jason, most people leave WP Engine because of factors outside of the company’s control (such as the project ending where hosting was needed), but Jason wanted to dig further.

Having a metric and an understanding of the reasons people were leaving wasn’t enough. Jason went out and found a benchmark for cancellation rate. This is one of the most challenging things for a startup to do: find a relevant number (or line in the sand) against which to compare yourself. Jason researched the hosting space using his investors and advisors. One of WP Engine’s investors is Automattic, the company behind WordPress, which also has a sizeable hosting business.

Jason found that for established hosting companies, there’s a “best case scenario” benchmark for cancellation rate per month, which is 2%. That means every month—for even the best and biggest hosting companies around—you can expect 2% of your customers to leave. On the surface, that looks like a huge number. “When I first saw our churn, which was around 2%, I was very concerned,” Jason says. “But when I found out that 2% is pretty much the lowest churn you’ll get in the hosting business, it changed my perspective a great deal.”

Had Jason not known that this is simply a fact of life in the hosting industry, WP Engine might have invested time and money trying to move a metric that wouldn’t budge—money that would have been far better spent elsewhere. Instead, with a benchmark in hand, Jason was able to focus on other issues and key performance indicators (KPIs), all the while keeping his eye on any fluctuation in cancellation rate. He doesn’t rule out the possibility of trying to break through the 2% cancellation rate at some point (after all, there can be significant value in reducing that churn), but he’s able to prioritize according to what’s going on in his business today, and where the biggest trouble spots lie, all while keeping an eye on the future success of the company.
Summary

• WP Engine built a healthy WordPress hosting business, but losing 24% of customers every year concerned its founders.
• By asking around, the founder discovered that a 2% per month churn rate was normal—even good—for that industry.
• Knowing a good line in the sand allowed him to focus on other, more important business objectives instead of trying to overoptimize churn.
Analytics Lessons Learned

It’s easy to get stuck on one specific metric that looks bad and invest considerable time and money trying to improve it. Until you know where you stand against competitors and industry averages, you’re blind. Having benchmarks helps you decide whether to keep working on a specific metric or move on to the next challenge.
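A quick arithmetic aside on the case study’s numbers: the “24% of customers every year” in the summary is the simple 12x multiplication of the 2% monthly benchmark; compounding the monthly retention gives a slightly lower figure. A minimal sketch (the 2% rate comes from the case study; the annual figures are derived):

```python
# Sketch: annualizing a monthly churn rate.
monthly_churn = 0.02  # 2% per month, the hosting-industry floor from the case study

# Simple (linear) approximation: 2% x 12 months ~= 24% of customers lost per year.
simple_annual = monthly_churn * 12

# Compounded view: retention shrinks geometrically month over month.
compound_annual = 1 - (1 - monthly_churn) ** 12

print(f"Linear approximation: {simple_annual:.1%} lost per year")    # 24.0%
print(f"Compounded:           {compound_annual:.1%} lost per year")  # ~21.5%
```

Either way, a “best case” 2% monthly churn still means losing roughly a fifth to a quarter of your customer base every year.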

Average Isn’t Good Enough

The Startup Genome project has collected key metrics from thousands of startups through its Startup Compass site.* Co-founder Bjoern Lasse Herrmann shared some of the metrics he’s gathered about an “average” startup. They serve as a sobering reminder that being average simply isn’t good enough. There’s a line in the sand, a point where you know you’re ready to move to the next KPI—and most companies aren’t anywhere near it.

Consider this: if you get your churn rate below 5%—ideally as low as 2%—each month, you have a reasonably sticky product. Bjoern’s average is between 12% (for indirectly monetized sites) and 19% (for those that monetize directly from users)—nowhere near good enough to move to the next stage.

Furthermore, consumer applications have a nearly 1:1 CAC to CLV ratio. That means they’re spending all the money they make acquiring new users. As we’ve seen, you’re doing well when you spend less than a third of your customer revenue acquiring new customers. For bigger-ticket applications (with a CLV of over $50K) things are less bleak, with most companies spending between 0.2% and 2% of CLV on acquisition.

Startup Compass has some great comparative insight, and we encourage you to use it to measure yourself against other companies. But realize that there’s a reason most startups fail: average is nowhere near good enough.
What Is Good Enough?

There are a few metrics—like growth rate, visitor engagement, pricing targets, customer acquisition, virality, mailing list effectiveness, uptime, and time on site—that apply to most (if not all) business models. We’ll look at these next. Then, in the following chapters, we’ll dig into metrics specific to the six business models we’ve covered earlier. Remember, though, that while you might turn immediately to the chapter for your business model, there’s always some overlap and relevant metrics in other business models that should be helpful to you. So we encourage you to look at what’s normal for other business models, too.

Growth Rate

Investor Paul Graham makes a good case* that above all else, a startup is a company designed to grow fast. In fact, it’s this growth that distinguishes a startup from other new ventures like a cobbler or a restaurant. Startups, Paul says, go through three distinct growth phases: slow, where the organization is searching for a product and market to tackle; fast, where it has figured out how to make and sell it at scale; and slow again, as it becomes a big company and encounters internal constraints or market saturation, and tries to overcome Porter’s “hole in the middle.”

At Paul’s startup accelerator, Y Combinator, teams track growth rate weekly because of the short timeframe. “A good growth rate during YC is 5–7% a week,” he says. “If you can hit 10% a week you’re doing exceptionally well. If you can only manage 1%, it’s a sign you haven’t yet figured out what you’re doing.” If the company is at the Revenue stage, then growth is measured in revenue; if it’s not charging money yet, growth is measured in active users.
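To see why those weekly thresholds matter, it helps to compound them. A minimal sketch, assuming a 52-week horizon (the horizon and the arithmetic are ours; the rates are the ones Paul quotes):

```python
# Sketch: what sustained weekly growth implies over a year (52 weeks).
weeks = 52
for weekly_rate in (0.01, 0.05, 0.07, 0.10):
    multiple = (1 + weekly_rate) ** weeks
    print(f"{weekly_rate:.0%}/week for a year -> {multiple:,.1f}x the starting base")
# 1%/week  -> ~1.7x
# 5%/week  -> ~12.6x
# 7%/week  -> ~33.7x
# 10%/week -> ~142.0x
```

The gap between “only” 1% and 5% a week is the difference between less than doubling in a year and growing more than tenfold.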
Is Growth at All Costs a Good Thing?

There’s no question that growth is important. But focusing on growth too soon is bad. We’ve seen how inherent virality—that’s built into your product’s use—is better than artificial virality you’ve added as an afterthought. A flood of new visitors might grow your user base, but might also be detrimental to your business. Similarly, while some kinds of growth are good, other kinds aren’t sustainable. Premature scaling, such as firing up the paid engine before you’re sticky, can exacerbate issues with product quality, cash flow, and user satisfaction. It kills you just as you’re getting started.

Sean Ellis notes that growth hackers are constantly testing and tweaking new ways of achieving growth, but that “during this process it is easy to lose sight of the big picture. When this happens, growth eventually falls off a cliff.”† He goes on to say, “Sustainable growth programs are built on a core understanding of the value of your solution in the minds of your most passionate customers.” As we saw in Chapter 5, Sean’s Startup Growth Pyramid illustrates that scaling your business comes only after you’ve found product/market fit and your unfair advantage. In other words: stickiness comes before virality, and virality comes before scale.

Most Y Combinator startups (and most startups, for that matter) focus on growth before they hit product/market fit. In some cases this is a necessity, particularly if the value of the startup depends on a network effect—after all, Skype’s no good if nobody else is using it. But while rapid growth can accelerate the discovery of product/market fit, it can just as easily destroy the startup if the timing isn’t right.

Paul’s growth strategy is also a very B2C-biased way to look at the world. B2B organizations have a different flow, from a few early customers for whom they look like consultants, to later-stage customers who tolerate a more generic, standardized product or service. Growing a B2B organization prematurely can alienate your core of loyal customers who are helping to build your business, stalling revenue and eliminating the referrals, case studies, and testimonials needed to grow your sales. This is a universal problem, best described by the technology lifecycle adoption model, first proposed by George Beal, Everett Rogers, and Joe Bohlen,* and expanded by Geoffrey Moore:† it takes a lot of work to move from early adopters to laggards as the product becomes more mainstream and the barriers to adoption fall.
Bottom Line

As you’re validating your problem and solution, ask yourself whether there are enough people who really care enough to sustain a 5% growth rate—but don’t strive for that rate of growth at the expense of really understanding your customers and building a meaningful solution. When you’re a prerevenue startup at or near product/market fit, your line in the sand should be 5% growth for active users each week, and once you’re generating revenues, they should grow at 5% a week.
Number of Engaged Visitors

Fred Wilson says that across Union Square Ventures’ portfolio companies, there’s a consistent ratio for engagement and concurrent users.‡ He says that for a web service or mobile application:

• 30% of registered users will use a web-based service at least once a month. For mobile applications, 30% of the people who download the app use it each month.
• 10% of registered users will use the service or mobile app every day.
• The maximum number of concurrent users will be 10% of the number of daily users.

While it’s a huge generalization, Fred says this 30/10/10 ratio is consistent across a wide variety of applications, from social to music to games. Getting to this stage of regular use and engagement is a sign that you’re ready to start growing, and to move into the Virality, Revenue, and Scale stages of your business.
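As a back-of-the-envelope check, here is what the 30/10/10 ratio implies for a hypothetical user base (the 100,000 registered users are an assumption for illustration, not a figure from the text):

```python
# Sketch: Fred Wilson's 30/10/10 engagement ratios applied to an illustrative user base.
registered_users = 100_000  # assumption, purely for illustration

monthly_actives  = 0.30 * registered_users   # ~30% use the service each month
daily_actives    = 0.10 * registered_users   # ~10% use it every day
peak_concurrents = 0.10 * daily_actives      # concurrents peak at ~10% of daily users

print(f"MAU: {monthly_actives:,.0f}, DAU: {daily_actives:,.0f}, "
      f"peak concurrent: {peak_concurrents:,.0f}")
# MAU: 30,000, DAU: 10,000, peak concurrent: 1,000
```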
Bottom Line

Aim for 30% of your registered users to visit once a month, and 10% of them to come daily. Figure out your reliable leading indicators of growth, and measure them against your business model predictions.
Pricing Metrics

It’s hard to know what to charge. Every startup makes money from different things, so there’s no easy way to compare pricing across companies. But you can learn some lessons from different pricing approaches.

A fundamental element of any pricing strategy is elasticity: when you charge more, you sell less; when you charge less, you sell more. Back in 1890, Alfred Marshall defined the price elasticity of demand as follows:

The elasticity (or responsiveness) of demand in a market is great or small according as the amount demanded increases much or little for a given fall in price, and diminishes much or little for a given rise in price.*

Unlike Marshall, you have the world’s greatest pricing laboratory at your disposal: the Internet. You can test out discount codes, promotions, and even varied pricing on your customers and see what happens. Let’s say you’ve run a series of tests on the price of your product. You know that when you change the price, you sell a certain number of items (see Table 21-1).

Table 21-1. How changing price affects sales
When we chart the resulting revenues, we get a characteristic curve (Figure 21-1). The best pricing is somewhere between $11 and $12, since this maximizes revenues.
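The mechanics behind that revenue curve are easy to reproduce. The sketch below uses made-up test data (the actual values from Table 21-1 aren’t reproduced here), chosen so the peak lands near $11 to mirror the conclusion above; it also computes arc elasticity between adjacent price points, the responsiveness Marshall described.

```python
# Sketch with illustrative price-test data (Table 21-1's real values are not shown here).
# units_sold[price] = how many units sold at that tested price point (all assumptions).
units_sold = {8: 125, 9: 118, 10: 110, 11: 102, 12: 93, 13: 82, 14: 70}

revenue = {price: price * qty for price, qty in units_sold.items()}
best_price = max(revenue, key=revenue.get)
print(f"Revenue-maximizing test price: ${best_price} (${revenue[best_price]:,})")

# Arc (midpoint) elasticity between adjacent tested prices:
# E = (% change in quantity) / (% change in price); |E| > 1 means demand is elastic.
prices = sorted(units_sold)
for p1, p2 in zip(prices, prices[1:]):
    q1, q2 = units_sold[p1], units_sold[p2]
    elasticity = ((q2 - q1) / ((q1 + q2) / 2)) / ((p2 - p1) / ((p1 + p2) / 2))
    print(f"${p1} -> ${p2}: elasticity {elasticity:.2f}")
```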

If all we’re hoping for is revenue optimization, this is the optimal price point. But revenue isn’t everything:

• Price yourself too high, and you may lose the war. Apple’s FireWire was a better communications technology, but Apple wanted to charge to license its patents, so USB won.* Sometimes charging too much can stall a market.
• If you experiment with your users and word gets out, it can backfire, as it did for Orbitz when the company recommended more expensive products to visitors using Macs.
• If you charge too little, you’ll arouse suspicion from buyers, who may wonder if you’re up to no good or you’re a scam. You may end up devaluing your offering in customers’ eyes.
• If you charge too much, you may slow down the much-needed viral growth or take too long to achieve network effects that improve your product’s functionality.
• Some things—like healthcare—you can sell at nearly any price; others, like bottled water, sell more when a price boost increases perceived quality, as Pellegrino and Perrier will happily tell you.
• If you make your pricing tiers simple, you’ll see better conversions. Patrick Campbell, co-founder and CEO of pricing service Price Intelligently, says that based on his data, companies with easy-to-understand tiers and a clear path up differentiated pricing plans convert customers at a much higher rate than companies with complicated tiers, features that aren’t always applicable, and hard-to-follow pricing paths.
• Products that “fly under the radar” and don’t need a boss’s approval convert at a much higher rate, because expensing something is easier.

Neil Davidson, joint CEO at Red Gate Software Ltd and author of Don’t Just Roll the Dice (Red Gate Books), says, “One of the biggest misconceptions around pricing is that what you charge for your product or service is directly related to how much it costs you to build or run it. That’s not the case. Price is related to what your customers are prepared to pay.”

Case Study | Socialight Discovers the Underlying Metrics of Pricing

Socialight was founded in 2005 by Dan Melinger and Michael Sharon, and sold to Group Commerce in 2011. The idea came from work Dan was doing in 2004 with a team at NYU focused on how digital media was changing how people communicated. This was in the early days of social networking: Friendster was the dominant social platform.

Socialight’s first incarnation was as a destination social network for Java-enabled mobile phones, which were considered the pinnacle of mobile app technology at the time. People could place “sticky notes” around the world, and then collaborate, organize, and share them with friends or the community as a whole.

Back then, Dan wasn’t focused on pricing, but shortly after launching Socialight, the founders realized that power users were looking for different feature sets based on how they were using the product. “The mobile software market was starting to mature, along with location-based services and devices like iPhones,” said Dan. “We also started getting approached by companies that wanted to pay for us to build and host mobile and social apps for them.”

This started the company’s pivot from B2C to B2B. It built an API to let others build their own applications, and then built a more advanced mobile app-maker product. This achieved good traction, with over 1,000 communities built atop it.

As Socialight moved into the B2B space, it launched a three-tiered freemium business model. The two paying tiers were called Premium and Pro, and cost $250 and $1,000–$5,500 per month, respectively. The main difference between the Premium and Pro offerings was the amount of involvement Socialight had with those customers—at $1,000–$5,500 per month, Socialight was very involved with lots of hours invested per month to work with customers.

Four months into its freemium launch, the company realized there was a problem. While the Pro customers were great for top-line revenue, they were costing Socialight a lot of money. “We realized that the margins we were getting from Pro customers were nowhere near as good as those from Premium, even though the revenue from Pro customers was great. Moreover, Pro customers took a lot longer to close, which is not something we understood well enough early on,” says Dan.

This is where a greater understanding and sophistication around price-related metrics becomes so important. Tracking revenue by pricing tier, which Socialight did from the outset, is a good place to start. But the other fundamental business metrics are perhaps even more important. For example, Socialight could have focused on customer acquisition cost versus customer lifetime value to identify its revenue and cost problems. Or it could have focused on margins earlier in the process, which would have helped identify its revenue issues.

Eventually, the company increased the Pro tier to $5,500/month exclusively, a reflection of the increased support required by customers. Socialight never got around to experimenting with different pricing strategies (it was acquired, after all!), but Dan would have liked to. “I think we could have reduced the Pro feature set a small amount and reduced its pricing significantly,” he says. This underscores the tricky balance in a freemium or tiered pricing model: how do you make sure that the features/services being offered fit into the right packages at the right price?

Instead of looking at pricing, Dan was able to experiment with other metrics. He looked for ways to encourage customers using the free service to convert to the Premium tier (and focused a lot less on the Pro tier). The focus on conversion (from free to paid) helped Socialight grow its business and get the bulk of its paid users into the profitable tier.

Summary

• Socialight switched from a consumer to business market, which required a change in pricing.
• The founders analyzed not only revenue, but also the cost of service delivery, and realized that high-revenue customers weren’t as profitable.
• They intentionally priced one of their tiers unreasonably high to discourage customers from buying it while still being able to claim it publicly.
Analytics Lessons Learned

Consider the impact that pricing has on customer behavior, both in terms of attracting and discouraging them. Price is an important tool for getting your customers to do what you want, and it should always be compared not only to cost of sales, but also to cost of goods sold and marginal cost.
Research on price elasticity suggests that it applies most in young, growing markets. Think about getting a walk-in haircut, for example. You may not check how much the haircut is; you know it’ll be within a certain price range. If the stylist presented you with a bill for $500, you’d be outraged. There’s a well-defined expectation of pricing. While startups often live in young, growing markets where prices are less established, bigger, more stable markets are often subject to commodity pricing, regulation, bulk discounts, long-term contracts, and other externalities that complicate the simplicity of the elasticity just described.

Your business model will affect the role pricing plays for you. If you’re a media site, someone is already optimizing revenue for you in the form of ad auctions. If you’re a two-sided marketplace, you may need to help your sellers price their offerings correctly in order to maximize your own profits. And if you’re a UGC site, you may not care about pricing—or may want to apply similar approaches to determine the most effective rewards or incentives for your users.

In a study of 133 companies, Patrick Campbell found that most respondents compared themselves to the competition when setting pricing, as shown in Figure 21-2. Some simply guessed, or based their price on the cost plus a profit margin. Only 21% of respondents said they used customer development.

While it might seem like getting pricing right is a team effort, the reality across these respondents was that the founder ultimately decided final pricing, as shown in Figure 21-3.

Despite the number of testing tools available to organizations that want to get serious about pricing, few companies did much more than check out the competition. As Figure 21-4 shows, only 18% did any kind of customer price sensitivity testing.

Ultimately, what Patrick’s research shows is that despite the considerable rewards for getting pricing right, most startups aren’t looking at real data—they’re shooting from the hip.
Bottom Line

There’s no clear rule on what to charge. But whatever your choice of pricing models, testing is key. Understanding the right tiers of pricing and the price elasticity of your market is vital if you’re going to balance revenues with adoption. Once you find your revenue “sweet spot,” aim about 10% lower to encourage growth of your user base.
Cost of Customer Acquisition

While it’s impossible to say what it’ll cost to get a new customer, we can define it as a percentage of your customers’ lifetime value. This is the total revenue a customer brings to you in the life of her relationship with you. This varies by business model, so we’ll tackle it in subsequent, model-specific chapters, but a good rule of thumb is that your acquisition cost should be less than a third of the total value a customer brings you over her lifetime. This isn’t a hard-and-fast rule, but it’s widely cited. Here’s some of the reasoning behind it.

• The CLV you’ve calculated is probably wrong. There’s uncertainty in any business model. You’re guessing how much you’ll make from a customer in her lifetime. If you’re off, you may have spent too much to acquire her, and it’ll take a long time to find out whether you’ve underestimated churn or overestimated customer spend. “In my experience, churn has the biggest impact on CLV, and unfortunately, churn is a lagging indicator,” says Zach Nies. He suggests offering only month-to-month subscription plans initially in order to get a better picture of true churn early on.
• The acquisition cost is probably wrong, too. You’re paying the costs of acquiring customers up front. New customers incur up-front cost—onboarding, adding more infrastructure, etc.
• Between the time that you spend money to acquire someone and the time you recoup that investment, you’re basically “lending” the customer money. The longer it takes you to recoup the money, the more you’ll need. And because money comes from either a bank loan or an equity investor, you’ll either wind up paying interest, or diluting yourself by taking on investors. This is a complex balance to strike. Bad cash-flow management kills startups.
• Limiting yourself to a customer acquisition cost (CAC) of only a third of your CLV will force you to verify your acquisition costs sooner, which will make you more honest—so you’ll recognize a mistake before it’s too late.

If your product or service costs a lot to deliver and operate, you may not have the operating margins to support even a third, and you may have to lower your CAC to an even smaller percentage of CLV to make your financial model work. What really drives your acquisition costs is your underlying business model. While there may not be an industry standard for acquisition, you should have some target margins that you need to achieve, and the percentage of your revenue that you spend on acquisition drives those margins. So when you’re deciding what to spend on customer acquisition, start with your business model.
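Here is a minimal sketch of checking that one-third rule of thumb. All the input numbers are assumptions for illustration, and the CLV formula used is the common simplification of monthly margin divided by monthly churn, not something prescribed by the text:

```python
# Sketch: checking the "CAC < 1/3 of CLV" rule of thumb with illustrative numbers.
monthly_revenue_per_customer = 50.0   # assumption
gross_margin = 0.80                   # assumption
monthly_churn = 0.03                  # assumption; churn dominates CLV, as Zach notes

# Common simple CLV approximation: margin earned per month divided by monthly churn.
clv = (monthly_revenue_per_customer * gross_margin) / monthly_churn

cac = 120.0                           # assumption: what you currently pay per customer
print(f"CLV ~= ${clv:,.0f}, CAC = ${cac:,.0f}, ratio = {cac / clv:.0%}")
print("Within the 1/3 rule of thumb" if cac <= clv / 3 else "Spending too much to acquire")
```

Because CLV here is driven almost entirely by the churn assumption, small errors in churn swing the answer a lot, which is exactly why the rule of thumb leaves so much headroom.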
Bottom Line

Unless you have a good reason to do otherwise, don’t spend more than a third of the money you expect to gain from a customer (and the customers she invites downstream) on acquiring that customer.
Virality

Recall that virality is actually two metrics: how many new users each existing user successfully invites (your viral coefficient) and the time it takes her to do so (your viral cycle time). There’s no “normal” for virality. Both metrics depend on the nature of your product, as well as market saturation.

A sustained viral coefficient of greater than 1 is an extremely strong indicator of growth, and suggests that you should be focusing on stickiness so you can retain those new users as you add them. But even a lower viral coefficient is useful, because it effectively reduces your customer acquisition cost. Imagine that it costs you $1,000 to acquire 100 new users. Your CAC is therefore $10. But if you have a viral coefficient of 0.4, then those 100 users will invite 40 more, who will in turn invite an additional 16, and so on. In the end, those 100 users are really 165 users. So your CAC is actually $6.06. Put another way, virality is a force multiplier for your attention-generating efforts. Done right, it’s one of your unfair advantages.

It’s also critical to distinguish between artificial virality and inherent virality. If your service is inherently viral—meaning that use of the product naturally involves inviting outsiders, as it does with products like Skype or Uberconf—the newly invited users have a legitimate reason to use the product. A Skype user you invite will join in order to get on a call with you. Users who join in this way will be more engaged than those invited in other, less intrinsic ways (for example, through a word-of-mouth mention).

On the other hand, if your virality is forced—for example, if you let people into a beta once they invite five friends, or reward people with extra features for tweeting something—you won’t see as much stickiness from the invited users. Dropbox found a clever way around this, by looking inherent and giving away something of value (cloud storage) when it was in fact largely artificial. People invited others because they wanted more space for themselves, not because they needed to share content. Only later did the company add more advanced sharing features that made the virality more inherent.

Don’t overlook sharing by email, which, as mentioned in Chapter 12, can represent nearly 80% of all online sharing, particularly for media sites and older customers.
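The $6.06 figure above comes from summing the invite waves. A minimal sketch of that arithmetic, using the same numbers as the example ($1,000 spent, 100 acquired users, a viral coefficient of 0.4):

```python
# Sketch: how a sub-1 viral coefficient lowers effective CAC (numbers from the example above).
spend, paid_users, k = 1000.0, 100, 0.4   # $1,000 buys 100 users; each wave invites 0.4x more

# Sum the first few invite waves, as in the text: 100 + 40 + 16 + 6.4 + 2.56 ~= 165.
total_users = sum(paid_users * k ** wave for wave in range(5))
print(f"Total users: ~{total_users:.0f}")            # ~165
print(f"Effective CAC: ${spend / total_users:.2f}")  # ~$6.06

# Letting the series run forever (valid for k < 1) gives paid_users / (1 - k) ~= 167 users,
# so the effective CAC converges toward spend * (1 - k) / paid_users = $6.00.
```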
Bottom Line

There’s no “typical” virality for startups. If virality is below 1, it’s helping lower your customer acquisition cost. If it’s above 1, you’ll grow. And if you’re over 0.75, things are pretty good. Try to build inherent virality into the product, and track it against your business model. Treat artificial virality the same way you would customer acquisition, and segment it by the value of the new users it brings in.
Mailing List Effectiveness

Mailing list provider MailChimp shares a considerable amount of data on how well mailing lists work.* Mailing list open rates vary widely by industry.* A 2010 study showed that construction, home and garden, and photo emails achieve nearly 30% open rate, but emails related to medicine, politics, and music get as little as 14%. And these are legitimate messages for which recipients have ostensibly signed up—not spam.

There’s plenty you can do to improve your email open rate. Targeting your mailings by tailoring messages to different segments of your subscriber base improves clicks and opens by nearly 15%. Email open rates change significantly based on the time of day—3 p.m., as it turns out, is when people are most likely to open something. Few people open emails on the weekend. More links in an email means more clicks. And newer subscribers are more likely to click on a message.

Jason Billingsley recommends testing an individualized send schedule equal to the signup time of the unique user. So, if a user signs up at 9 a.m., schedule to send her updates at 9 a.m. “Most email tools aren’t set up for such a tactic, but it’s a highly valuable test that could yield significant results,” he says.

But by far the biggest factor in mailing list effectiveness is simple: write a decent subject line. A good one gets an open rate of 60–87%, and a bad one suffers a paltry 1–14%.† It turns out that simple, self-explanatory messages that include something about the recipient get opened. Sometimes it’s just one word: Experian reported that the word “exclusive” in email promotional campaigns increased unique open rates by 14%.‡

François Lane, CEO of mailing platform CakeMail, has a few additional cautions that underscore how email delivery metrics are interrelated:

• The more frequently you email users, the lower your bounce and human-flagged spam rates (because those addresses quickly get removed from the list), but frequent emailing also tends to reduce engagement metrics like open rate and click-through rate, because recipients get email fatigue.
• A higher rate of machine-flagged spam leads to a lower rate of human-flagged spam, because humans don’t complain about mail they don’t receive.
• Open rate is a fundamentally flawed metric, because it relies on the mail client to load a hidden pixel—which most modern mail applications don’t do by default. This is one of the main reasons newsletter designers focus on imageless layout. Open rates are mainly useful for testing subject lines or different contact lists for a single campaign, but they provide only a sample, and at best a skewed one.
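If you track these numbers yourself, the underlying calculations are simple ratios. A minimal sketch with illustrative campaign numbers (none of these figures come from MailChimp or CakeMail):

```python
# Sketch: basic mailing-list metrics from illustrative campaign numbers (all assumptions).
sent, bounced, opens, clicks, unsubscribes = 10_000, 300, 2_600, 560, 45

delivered = sent - bounced
open_rate = opens / delivered      # undercounts: relies on a tracking pixel being loaded
click_rate = clicks / delivered    # click-through rate against delivered mail
click_to_open = clicks / opens     # how compelling the content was for those who opened
unsub_rate = unsubscribes / delivered

print(f"Open rate: {open_rate:.1%}, CTR: {click_rate:.1%}, "
      f"click-to-open: {click_to_open:.1%}, unsubscribe: {unsub_rate:.2%}")
# Open rate: 26.8%, CTR: 5.8%, click-to-open: 21.5%, unsubscribe: 0.46%
```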
Bottom Line

Open and click-through rates will vary significantly, but a well-run campaign should hit a 20–30% open rate and over 5% click-through.
Uptime and Reliability

The Web isn’t perfect. A 2012 study of static websites running on 10 different cloud providers showed that nearly 3% of tests to those clouds resulted in an error.* So even if your site is working all the time, the Internet and the underlying infrastructure will cause problems. Achieving an uptime of better than 99.95% is costly, too, allowing you to be down only 4.4 hours a year. If your users are loyal and engaged, then they’ll tolerate a small amount of downtime—particularly if you’re transparent about it on social networks and keep them informed.
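Translating an uptime percentage into allowable downtime is a one-line calculation; a minimal sketch:

```python
# Sketch: how much downtime a given uptime target allows per year.
HOURS_PER_YEAR = 24 * 365  # 8,760

for uptime in (0.995, 0.999, 0.9995):
    downtime_hours = (1 - uptime) * HOURS_PER_YEAR
    print(f"{uptime:.2%} uptime -> {downtime_hours:.1f} hours of downtime per year")
# 99.50% -> 43.8 hours
# 99.90% -> 8.8 hours
# 99.95% -> 4.4 hours  (the figure quoted above)
```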
Bottom Line

For a paid service that users rely on (such as an email application or a hosted project management application), you should have at least 99.5% uptime, and keep users updated about outages. Other kinds of applications can survive a lower level of service.
Site Engagement

Everyone cares about site engagement (unless you’re exclusively mobile, but even then you likely have a web presence driving mobile downloads). In some cases (such as a transaction-focused e-commerce site), you want site visitors to come onto your site and engage quickly, whereas in other cases (such as a media site that monetizes via ads), you want visitors spending as much time as possible.

Analytics firm Chartbeat measures page engagement across a multitude of sites. It defines an “engaged” user as someone who has a page open and has scrolled, typed, or interacted with the page in the last few seconds. “We generally see a separation between how much engagement sites get on landing pages—which typically get high traffic and low engagement—and other pages,” says Joshua Schwartz, a data scientist with the company. “Across my sample of sites, average engaged time on landing pages was 61 seconds and on non-landing pages it was 76 seconds. Of course, this varies widely between pages and between sites, but it’s a reasonable benchmark.”
Bottom Line

An average engaged time on a page of one minute is normal, but there’s wide variance between sites and between pages on a site.
Web Performance

Study after study has proven that fast sites do better across nearly every metric that matters, from time on site to conversion to shopping cart size.* Yet many web startups treat page-load time as an afterthought.

Chartbeat measures this data across several hundred of its customers who let the company analyze their statistics in an anonymized, aggregate way.† Looking at the smaller, lower-traffic sites in its data set, the company found that these took 7–12 seconds to load. It also found that pages with very slow load times have very few concurrent users, as shown in Figure 21-5.

“There seems to be a hard threshold at about 15–18 seconds, where after that users simply won’t wait, and traffic falls off dramatically,” says Joshua. “It’s also notable that the largest sites in our sample set, those with thousands of concurrents, had some of the fastest page load times—often under five seconds.”
Bottom Line

Site speed is something you can control, and it can give you a real advantage. Get your pages to load for a first-time visitor in less than 5 seconds; after 10 seconds, you’ll start to suffer.
Exercise | Make Your Own Lines in the Sand

In this chapter and the next six chapters, we share lines in the sand, or baselines, for which you can aim. You should already have a list of key metrics that you’re tracking (or would like to track). Now compare those metrics with the lines in the sand provided in the following chapters. How do you compare? Which metric is worst off? Is that metric your One Metric That Matters?
