Validated Learning of Lean Startups: The 3 A's


Hong Sun
Management Consultant, Canada


Entrepreneurs embracing the Lean Startup philosophy essentially adapt the ideas of Lean Manufacturing (small batch sizes (KANBAN), JIT, shortening cycle times, decreasing waste) to the continuous improvement of their startup's offerings.

Validated Learning

Split Testing (A/B Testing) and certain metrics play a crucial role in so-called validated learning. Validated Learning is a "rigorous method for demonstrating progress when one is embedded in the soil of extreme uncertainty in which startups grow. It is the process of demonstrating empirically that a team has discovered valuable truths about a startup's present and future business prospects" (Ries p. 38). It is based on the assumption that any piece of learning or improvement work is incomplete, if not completely useless, unless the company VALIDATES that the "improvement" indeed brings real benefits to users, benefits they are willing to pay for.
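A split test can be evaluated with a simple before/after comparison of conversion rates. The sketch below is a minimal illustration in Python; the visitor and conversion counts are made-up, and the two-proportion z-test used here is a standard statistical choice, not a method prescribed in the book:

```python
import math

def z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test statistic comparing variant B against variant A."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: a new sign-up page (B) vs. the current one (A),
# each shown to 2,400 visitors.
z = z_score(conv_a=120, n_a=2400, conv_b=165, n_b=2400)
significant = abs(z) > 1.96  # roughly the 95% confidence threshold
print(f"z = {z:.2f}, significant: {significant}")
```

If the difference is significant, the team has validated (or invalidated) the change with evidence rather than opinion.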

3 Characteristics of Metrics to Guide the Progress of Startups

What are the best metrics for startups to measure whether they are simultaneously gaining validated learning and making real progress in their endeavors to test ideas, attract customers, and scale up? According to Eric Ries, the author of "The Lean Startup," the best metrics must feature 3 A's—Actionable, Accessible, and Auditable:
  1. Actionable
    An actionable metric is able to demonstrate clear cause and effect, so that it is obvious what actions would be necessary to replicate the good results and avoid the bad ones. Its opposite is a vanity metric that cannot be acted upon. Take, for instance, the number of hits to Company X's website. If the company has a million hits this month, what needs to be done to get more hits? It's hard to tell, since the answer depends on too many factors, such as where the hits come from (from a million new customers or from a few extremely active visitors), whether the hits are the result of a new marketing campaign, etc. In other words, no clear cause and effect is disclosed by such an overly general metric, so no action can be taken to get better results.

    Furthermore, the ambiguous information provided by vanity metrics often triggers discord within a company: when numbers go up, people conveniently attribute the improvement to their own actions, hence the frequent fights for credit and rewards between different teams; when numbers go down, it automatically becomes someone else's fault, and departments develop their own jargon and culture as defense mechanisms against other departments.

    Actionable metrics are the antidote to the problems caused by vanity metrics. When cause and effect is clearly understood, there is no fight for credit and no finger-pointing; when an unmistakable and objective assessment is given, people are better able to take action and learn from it. For example, if the metric used by Company X is the number of people who visited its website during the week before a specific marketing campaign, compared with the number during the week after the campaign, the marketing team would have a more solid idea of the effectiveness of their campaign and be able to decide whether to continue or stop it.
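    The before/after comparison above can be sketched in a few lines; the visitor logs and IDs below are made-up illustrations, and counting unique people rather than raw hits is the actionable unit the example calls for:

```python
def unique_visitors(hit_log: list[str]) -> int:
    """Count distinct visitor IDs in a raw hit log, not the raw hits themselves."""
    return len(set(hit_log))

# Hypothetical raw hit logs: the same person can generate many hits.
week_before = ["u1", "u2", "u2", "u3"]        # 4 hits, 3 distinct people
week_after  = ["u1", "u4", "u5", "u5", "u6"]  # 5 hits, 4 distinct people

lift = unique_visitors(week_after) - unique_visitors(week_before)
print(f"campaign lift: {lift} additional unique visitors")
```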

  2. Accessible
    Too many metric reports are not understood by the people who are supposed to use them in decision making. On top of that, many managers and departments spend a lot of energy learning how to use data to show what they want rather than as genuine feedback to guide their future actions. Fortunately, there is a countermeasure to this misuse of data: making metrics accessible.

    The easiest way to make reports as simple as possible so that everyone can comprehend them is to use tangible, concrete units. E.g. what is a website hit? Does a web page count as one hit or many hits since each page can contain lots of image files? The ambiguity involved in the definition leads to endless debate. But there's no haziness in interpreting the number of people visiting the website: one can practically picture those visitors sitting at their computers and count them one by one.

    Accessible learning metrics are most commonly presented in cohort-based reports and analyses: instead of looking at cumulative totals such as a company's overall customer count or total number of website hits, one looks independently at the performance of each group of customers (each group is called a cohort) that comes into contact with a product. E.g. how many customers offered feedback within two weeks after they purchased a specific product? Or, among people who visited a certain product's webpage, how many placed an order for the product immediately? Cohort analyses provide critical information about a company's customers and their actions; such information is not only easy to understand, but also actionable.
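    A cohort-based report like the one described above can be sketched as follows; the customer IDs, week labels, and order flags are hypothetical data invented for the example:

```python
from collections import defaultdict

# Hypothetical records: (customer_id, week the customer first visited, placed an order?)
events = [
    ("c1", "2021-W01", True),
    ("c2", "2021-W01", False),
    ("c3", "2021-W02", True),
    ("c4", "2021-W02", True),
    ("c5", "2021-W02", False),
]

# Group customers by the week they arrived: week -> [orders, visitors]
cohorts = defaultdict(lambda: [0, 0])
for _, week, ordered in events:
    cohorts[week][1] += 1
    if ordered:
        cohorts[week][0] += 1

# Each cohort's conversion rate is reported separately, not as a running total.
for week in sorted(cohorts):
    orders, visitors = cohorts[week]
    print(f"{week}: {orders}/{visitors} ordered ({orders / visitors:.0%})")
```

Comparing cohort against cohort shows whether newer customers behave better than older ones, which a cumulative total would hide.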

    Accessibility also refers to widespread access to the reports. For instance, reports containing the latest data for all the experiments and the related metrics in a company are made available on the company's website, accessible to anyone with an employee account; each employee can log in to the system at any time, choose from a list of all current and past experiments, and see a well laid out and easy to read one-page summary of the results.

  3. Auditable
    When something bad happens to a project, the owner of the project is tempted to challenge the veracity of the data indicating the problems. Such challenges are more common than most managers would like them to be, yet most data reporting systems are not designed to answer them well by providing supporting documents, whether out of a desire to protect customers' privacy or simply due to negligence. That is why the third A of good metrics, "auditable," is so essential: the data must be highly credible to everyone.

    The solution? First, producers of reports need to be able to test the data by hand, by talking to customers in the messy real world; managers also need to be able to spot-check the data with real customers. This is the only effective way to verify whether the reports contain true facts and are hence auditable. Besides enhancing the auditability of data in the system, talking with real customers can also give managers and entrepreneurs critical insights into why customers are behaving the way the data indicate.

    Second, people building reports must make sure that the tools used for generating reports are not too complex. Whenever possible, reports should be drawn directly from the master data rather than from an intermediate system, which reduces the possibility of technical error. The truth is that every time a team fails in a project because of a technical mistake with the data, the team's confidence, morale, and discipline are weakened as well.
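    As a minimal illustration of drawing figures from master data, the sketch below re-derives a reported visitor count directly from hypothetical master records and flags any disagreement; the record fields and numbers are assumptions made for the example:

```python
# Hypothetical master records, the system of record behind the dashboard.
master_records = [
    {"customer": "c1", "visited": True},
    {"customer": "c2", "visited": True},
    {"customer": "c3", "visited": False},
]

# Figure taken from the intermediate reporting system under audit.
reported_visitors = 2

# Recompute the figure straight from the master data and compare.
recomputed = sum(1 for record in master_records if record["visited"])
assert recomputed == reported_visitors, "report disagrees with master data"
print("report verified against master data")
```

When the recomputed figure matches the report, the report owner can defend it; when it does not, the discrepancy is caught before it erodes the team's trust in the numbers.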
Learn more: Critical Success Factors and Key Performance Indicators | Split Testing (A/B Testing) | Start-up Companies | Business Incubators.

Source: Ries, E. (2011). The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. The Crown Publishing Group.

© 2021 12manage - The Executive Fast Track. V15.8 - Last updated: 27-9-2021. All names ™ of their owners.