Four Mistakes that Undermine Learning Through the Lean Canvas

The lean canvas enables rapid testing of business model assumptions so that we can find problems worth solving and a repeatable business model or, at worst, quickly determine that pursuing an idea would be a waste of time and money. For this to work, the assumptions on the canvas need to be written in a testable format so that they can be validated or invalidated.

All models are wrong, but some are useful.
— George E.P. Box

Every lean canvas is wrong in that it does not reflect the exact reality of how a business will create, deliver, and capture value. However, a lean canvas is useful because it provides a framework of critical assumptions to test. Lean canvases are less useful when they:

  • Are overloaded with non-essential assumptions

  • Use vague assumptions that are impossible to refute and don’t generate meaningful insights

  • Contain assumptions written in a format that cannot be validated

Untestable canvases result in entrepreneurs spending critical time on work that does not move them toward a repeatable business model, or continuing to pursue an idea that has no merit. To validate ideas quickly, we need testable assumptions.

Below are common mistakes we innovators make when outlining business models on the lean canvas:

Mistake One - Early adopter characteristics are not written in a measurable format

I often find that the early adopter characteristics provided are not in a measurable format. For example, a business-to-business (B2B) startup may list “large companies” as an early adopter, but the definition of a large company is not clear. Likewise, a business-to-consumer (B2C) startup might put something like “music lover” as the early adopter, but it is not clear how that would be measured. This is a problem because it makes it impossible to validate whether a customer is an early adopter or to categorize interviews by key criteria.

Behaviours, interests, lifestyles, and values are very important to providing a clear picture of who the early adopter is, but they can seem difficult to measure. I recommend using proxies: measurable stand-ins for the characteristics of our ideal customer. For example, for a characteristic such as “large companies,” we can use number of employees, revenue, number of locations, or another appropriate metric that denotes size and captures the aspect that makes large companies likely early adopters. For consumers, a characteristic such as “music lover” can be measured using proxies such as the number of concerts attended in a year, attendance at a music festival in the past year, hours of music listened to each week, or a music purchase pattern. Even emotions or personal characteristics such as being anxious can be measured using a self-reported response on a Likert scale: the degree to which a user agrees with the statement “I am an anxious person.” All proxies need to be validated through user testing to find the metrics with the strongest correlation to need.
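To make this concrete, here is a minimal sketch of how an early adopter screen might be expressed as explicit, checkable criteria for the fatigue-wearable example. The field names and thresholds (other than the 200+ operator count used in Figure 1) are hypothetical illustrations of proxies, not validated metrics:

```python
from dataclasses import dataclass

@dataclass
class CompanyProfile:
    """Measurable proxies collected during customer discovery."""
    heavy_equipment_operators: int       # proxy for "large company"
    fatigue_injuries_last_year: int      # hypothetical proxy for the cost of the problem
    workers_commute_from_off_site: bool  # proxy for discretion over free time

def is_early_adopter(profile: CompanyProfile) -> bool:
    """Screen an interview candidate against explicit, testable criteria.

    The thresholds below are assumptions to be validated through
    user testing, not fixed rules.
    """
    return (
        profile.heavy_equipment_operators >= 200
        and profile.fatigue_injuries_last_year >= 5
        and profile.workers_commute_from_off_site
    )

# Example: categorizing an interview candidate
candidate = CompanyProfile(
    heavy_equipment_operators=350,
    fatigue_injuries_last_year=12,
    workers_commute_from_off_site=True,
)
print(is_early_adopter(candidate))  # True -> schedule a problem interview
```

The value here is not the code itself but the discipline it forces: every criterion must be specific enough that we can answer yes or no for any company we talk to.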

Having an early adopter profile in a measurable format makes it easier to identify whether someone is an early adopter and helps determine the characteristics that drive the need for a solution.

In Figure 1, I compare an untestable section with a best practice section for a wearable and app that measures fatigue. The best practice section uses “employ 200+ heavy equipment operators” as the characteristic for size rather than “large company” because we assume the number of employees in that role drives the need. Other drivers of need are the rate of fatigue-related injuries and the likelihood that workers commute from off-site, since those workers have more discretion over how they spend their free time.

Figure 1: Untestable vs. best practice early adopter sections.

Mistake Two - Problems are too high-level and vague

Another common pitfall is to list problems that are high-level statements about an opportunity to provide additional value or about missing benefits. In Figure 2, example problems in the untestable section include “high cost of workplace injuries” and “difficult to know how tired the workforce is.” For a B2C example, a high-level problem for a dating app would be “it is difficult to meet new people.” These problems are untestable because they are vague and hard to refute. They are also problematic when conducting problem interviews because vague problems tend not to provide useful insights into how the customer experiences the problem.

It is best to define specific root causes of a problem, as they provide better user feedback and point us toward solutions. In Figure 2, “self-reported employee data on fatigue is not accurate or reliable” and “not able to assess worker engagement with health and safety information on fatigue” are specific problems that are easy to discuss. Using the dating app example, “don’t start conversations with someone attractive for fear of rejection” is concrete and gets at the root cause of what makes it difficult for some people to meet new people. Using a technique such as the Five Whys helps get to the root cause of problems.

Figure 2: Untestable vs. best practice problem sections.

Mistake Three - Solution is a summary of concept and benefits

A common pitfall is to put a general statement about the solution, like the one in the untestable section of Figure 3: “a wearable and app that uses AI to assess the fatigue in all employees.” Such vague statements are difficult to test. It is best to define the individual features that will solve the corresponding problems listed on the canvas. Making these critical assumptions about the product’s features explicit enables us to begin validating the product’s design in solution interviews. It also helps us prioritize the features of a minimum viable product.

A similar pitfall is the tendency to list the benefits of the solution rather than its features. Benefits are hard to refute and therefore less testable. It is best to use them as the starting point for developing a value proposition instead.

In Figure 3, the best practice solution section states a feature for each of the problems listed rather than listing key benefits.

Figure 3: Untestable vs. best practice solution sections.

Mistake Four - Vague descriptions of channels

Channels are the paths we build to early customers. Listing general channel categories such as social media, blogs, or online sales is too vague to be validated or invalidated. Users and their behaviour are very different on each social media platform (e.g. Instagram, Twitter, LinkedIn), so it is important to specify which platforms will be used as channels based on the characteristics and behaviour of the early adopters. A good place to start is to consider, and then test, the question: where do my early adopters go when trying to figure out how to solve these problems?

Be careful not to list assumptions about where transactions occur, such as online sales or a website, as those are not channels. Channels are how users and customers find your website or product.

In Figure 4, note that the best practice section is specific about which blogs are targeted and, rather than using a general category such as online advertising, makes a specific assumption about Google Ads.

Figure 4: Untestable vs. best practice channel sections.

Summary

Putting it all together, below is a comparison between an untestable and a best practice canvas. It is not the amount of information on the canvas that makes it useful, but rather having all the critical assumptions in a clear, testable format so that they can be validated or invalidated.

Untestable canvas vs. best practice canvas.
