An Experience Management Manifesto

Organizations generally realize that they must manage Customer Experience to create meaningful customer value. Yet too often organizations align the Product Manager role solely with product development efforts, trapping those who occupy the role in a too-narrow, “build it and they will come” mindset. In the process, critical aspects of how value is created for customers are missed as Product Managers furiously work to develop and launch products. But value delivered, in the eyes of the customer, encompasses every aspect of the customer journey, from first awareness through satisfied testimonial.

Customers don’t buy products; they buy the total experience of a solution.

A bit of history here is informative. The mindset behind managing the value delivered to customers had its first evolution in the 1920s, when the concept of “Brand Management” was introduced by Procter & Gamble. Brand Management’s marketing orientation stemmed from the growing opportunity that mass marketing offered and from the realization that customers can form attachments to the image and promise that products represent in their minds. Later, with ever-increasing levels of technology came the need to manage this technical complexity and a concomitant need for rapid product innovation. This yielded the more technology-oriented “Product Management” role. This role has typically focused strongly on understanding the customer needs that should drive product enhancement and new product development, maintaining a particular focus on creating user satisfaction with the product itself.

But today, with extraordinary levels of competition, easy vendor substitution, ever-growing customer impatience, and increasing expectations for ease and speed in every way imaginable, companies must more fully embed a holistic view of how they deliver value. While the product is clearly the center point of the value we deliver to customers, it is their total experience that determines the customer’s willingness to purchase, to recommend, and to repurchase. Today, there are no successful products. There are only successful customer experiences that include products. Which means that Product Managers whose eyes focus only on the product are missing the point.

The implications for companies and for the Product Management discipline are clear. We must maintain a total solution focus because what must be managed is not just our image or our widgets. What must be managed is what the customer experiences from first awareness through product use through satisfied testimonial. If product plans don’t include a mapping of how you will satisfy customers throughout their journey, you’re back to the outmoded “build it and they will come” mindset. They won’t. Which means that we can no longer be product organizations. We must be customer experience organizations.

For leaders this means that the people within your organization who manage products must be given a broad remit to consider, create, and adjust all the diverse elements that contribute value to the customer experience. Silos must be bridged, and that means that Product Managers with this responsibility must be T-shaped: broad in perspective and knowledge across disciplines, not just deep in domain knowledge. The Product Manager role re-envisioned as “Experience Manager” embraces these things.

To pull it off, organizations must establish a total customer experience orientation and get out of the pure product mindset. This means that current, internal product discussions must focus not just on the features and benefits of the product, but on how cradle-to-grave value is created for the customer through any new initiative. It means mapping the future-state customer experience you plan to create, not just laying out a product roadmap. And of course, leaders must hire and train those T-shaped people to fulfill the role, while putting the necessary mechanisms in place to bridge the silos. Yes, it’s asking a lot.

But again:

Today there are no successful products. There are only successful customer experiences that include products.

Stop Fighting - Pick a Prioritization Method and Stick with It

Is the process of prioritizing features and initiatives a battle or a pleasure?

Don’t answer. I know…

The Problem

The prioritization process is usually based upon opinion, customer anecdotes, or the loudest voice in the room, or else upon “objective” criteria that shift according to opinion, customer anecdotes, and loud voices.

The Solution

When you establish a fixed methodology for prioritization with buy-in from all stakeholders, argumentation gives way to discussion and quantification. The question is: which prioritization system is right for us? Allow me to detail some alternatives.

Prioritization Systems

You need a prioritization methodology to rank the potential initiatives or products in which you will invest (or not) and, at a more granular level, to rank the features planned for a product. For purposes of this article, I’ll focus primarily on the latter case: feature prioritization.

I put prioritization schemes into three broad categories:

Standard Criteria Models

These utilize the same evaluative criteria regardless of which company employs the scheme. With these systems, a priority number is calculated for each feature according to those criteria. Examples I discuss include RICE and WSJF.

Classification Schemes

Though they are often identified as such, these are not actually prioritization models per se. However, they do offer useful ways of categorizing features. Here I’ll discuss MoSCoW and KANO.

Custom Criteria Models

This approach allows your organization to establish a customized set of stable criteria that suit the organization’s particular product category, market, business priorities, company culture, or other factors. As with Standard Criteria models, a priority number is calculated for each feature. Spoiler: this is the one I like best.

Standard Criteria Models

RICE

RICE is favored by many product teams for its simplicity. It assesses the customer/market value of a feature, then qualifies that value against a realistic estimate of its development cost. For each feature, a number is assigned to each criterion, with the feature’s priority score determined by this formula:

(Reach × Impact × Confidence) / Effort

Reach (How many customers will this feature impact?) This could be measured as customers per quarter, transactions per month, or another measure, so long as the same measure is used for all features.

Impact (The level of impact the feature is expected to have for its intended purpose) Rated on a 1-5 scale. Whether the feature is designed to increase adoption, speed workflow, simplify product use, add a use case to the product, or another purpose, the rating signifies how well we expect it to accomplish that.

Confidence (How confident are we in our estimates of cost and value?) Rated 0-100%. This criterion helps tamp down our natural enthusiasm for exciting concepts. If we are not very confident in our assumptions, the priority rating goes down.

Effort (Your current estimate of cost) This can be a monetary value, labor hours, or, early on, even t-shirt sizes. However, you must use the same measure across all features being prioritized.
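
As a concrete illustration, the RICE formula can be sketched in a few lines of Python. The feature names and ratings below are purely hypothetical:

```python
def rice_score(reach, impact, confidence, effort):
    """(Reach x Impact x Confidence) / Effort.

    reach:      same unit for all features (e.g., customers per quarter)
    impact:     1-5 rating
    confidence: 0.0-1.0 (i.e., 0-100%)
    effort:     same unit for all features (e.g., person-weeks)
    """
    return (reach * impact * confidence) / effort

# Hypothetical features, scored with the team's agreed estimates.
features = {
    "bulk_export": rice_score(reach=800, impact=3, confidence=0.8, effort=4),
    "sso_login":   rice_score(reach=1200, impact=4, confidence=0.5, effort=10),
}

# Highest score first: this becomes the priority order.
ranked = sorted(features.items(), key=lambda kv: kv[1], reverse=True)
```

Note how Confidence does its job here: the second feature reaches more customers with higher impact, but low confidence in those estimates pulls its score down.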

WSJF

Weighted Shortest Job First is similar to RICE but adds a time factor to its calculation. Many teams using Agile software development favor this approach because it pushes the more time critical features forward in the development backlog.

Like RICE, numbers are assigned to each criterion, with the feature’s priority score determined by a formula:

(User-Business Value + Time Criticality + Risk Reduction or Opportunity Enablement) / Job Duration

Another advantage of WSJF is that it lets you define the specific meanings of those standard criteria to best suit your business. Here are those criteria:

User-Business Value (Combines your assessment of value to customers and to your business) Rated on a 1-10 scale. As mentioned, your team gets to define just what lower or higher scores indicate for your business. For example, you might specify that a rating of 1-3 means that few customers will be impacted by the feature, that it offers only a moderate value proposition, and that it provides only incremental value to your business. By contrast, you might specify that a rating of 8-10 means that the feature will impact most of your customers, delivering high value that is tied to key business strategies.

The tuning of what each criterion means lets the model better accommodate the particular contexts of your business. Discuss with all the relevant stakeholders, get consensus, and make a reference chart of these rating definitions.

Time Criticality (Rolls up your assessment of factors like fixed deadlines, customer urgency, current impacts, etc.) Rated on a 1-10 scale. Here you might specify that a low score means the feature is not urgent, while a high score indicates that it must be completed ASAP (e.g., prompted by regulatory requirements).

Risk Reduction or Opportunity Enablement (Rather than assessing direct customer or business impact, this factor represents your evaluation of how future risk and opportunity are impacted.) Again, this is rated on a 1-10 scale. You might specify that a low score indicates that the potential opportunity impacted is vague at best, while a high score signifies that it will very positively impact well-defined opportunities or risks.

Job Duration (How long will it take?) This too is rated on a 1-10 scale, so it is important to be clear about the range of durations you want this rating to represent. For example, you might select weeks, or perhaps sprint cycles, as your measure of duration. For clarity, you must specify how many duration units are represented at each level of the 1-10 scale.

When you do the math, those features with higher User-Business Value, Time Criticality, and Risk Reduction/Opportunity Enablement that also require less time (Job Duration) will score highest. The system prioritizes the more important features that you can bring to market more quickly. Hence the name: Weighted Shortest Job First.
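
A minimal sketch of the WSJF calculation, again with hypothetical feature names and ratings on the agreed 1-10 scales:

```python
def wsjf_score(value, time_criticality, risk_opportunity, duration):
    """(User-Business Value + Time Criticality + Risk Reduction or
    Opportunity Enablement) / Job Duration. All inputs are 1-10 ratings."""
    return (value + time_criticality + risk_opportunity) / duration

# Hypothetical backlog items with the team's agreed ratings.
backlog = {
    "audit_log": wsjf_score(value=6, time_criticality=9, risk_opportunity=7, duration=3),
    "dark_mode": wsjf_score(value=4, time_criticality=2, risk_opportunity=2, duration=2),
}

# Highest WSJF score first: important work you can ship quickly rises to the top.
ranked = sorted(backlog, key=backlog.get, reverse=True)
```

Here the time-critical, risk-reducing item outscores the quicker but less valuable one, which is exactly the behavior the scheme is designed to produce.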

NOTE: Both of these standard criteria schemes use cost as the denominator in the formula. In this way they risk elevating cheap features to the top of the priority list despite the low customer or business value those features might actually offer. Something to keep in mind when choosing a prioritization approach.

Classification Schemes

I’d like to touch on a couple of popular “prioritization schemes” that are not actually prioritization schemes. These serve to classify features instead, but those classifications can be useful in the prioritization process.

MoSCoW

In the case of MoSCoW, you simply categorize features into one of four buckets based upon your assessment of those feature concepts. As with WSJF, you must first specify and agree upon what each of these buckets means. Then start bucketing:

Must have (These are things that are necessary to a functioning product and thus not debatable.)

Should have (These things add value for customers and/or the business, so you’d really like to have as many of these in the product as possible.)

Could have (You can think of these as things that would be nice to have. They add value, but not at the level of the Should haves.)

Won’t have (This is not an assessment of value, but rather, a way of identifying those things that will not be included in the release being planned for any number of reasons.) “Won’t have” serves to clarify this to all and to prevent scope creep. But these features will likely surface again in planning around future releases.

Again, MoSCoW is a classification scheme that produces no prioritization ranking. So, very likely, the features designated as Should haves and Could haves will next need to be put through a separate prioritization scheme to rank those things.
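
For teams that track features in code or a spreadsheet, the bucketing itself is trivial. A small sketch (all feature names are hypothetical) also makes the scheme’s limitation visible: the Should and Could buckets still need a separate ranking pass.

```python
from collections import defaultdict

# Hypothetical feature assessments, agreed by the team against
# the bucket definitions established up front.
assessments = [
    ("User login",      "Must have"),
    ("Bulk CSV import", "Should have"),
    ("Custom themes",   "Could have"),
    ("Offline mode",    "Won't have"),
]

buckets = defaultdict(list)
for feature, bucket in assessments:
    buckets[bucket].append(feature)

# MoSCoW classifies but does not rank: Should haves and Could haves
# get fed into a separate prioritization scheme (e.g., a scoring model).
to_prioritize = buckets["Should have"] + buckets["Could have"]
```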

KANO

The core of the KANO model is its classification scheme which, similar to MoSCoW, buckets features into categories. However, KANO also offers an analytical survey methodology that can be used to collect customer sentiment directly. Of course, this offers only the customer’s perspective, not your relevant business goals and parameters, so KANO can’t serve as your sole prioritization scheme any more than MoSCoW can.

The category buckets used by KANO include:

Must haves (Those things without which you really don’t have a product.) All must be included in the product.

Linear (Those things which we know customers want from our discussions with them.) The more of these, the better.

Delighters (Things that we believe will surprise and delight customers with unexpected value.) These are few and far between. But when you have one in hand it’s worth great sacrifice and cost to get it into the product because these are the capabilities that are most differentiating and that create the most buzz with customers.

Indifferent (Those things about which customers don’t care much.) Nuf said.

Reverse (These are product features or attributes that actually diminish the customer’s assessment of the product.) This is a useful category because it lets us identify things that are not simply ranked low, but which actually hurt the value of the product in the customer’s eye. Sometimes these turn out to be the brilliant features we’ve invented and are passionate about. Nevertheless, they have to go.

KANO’s buckets can be used as such to categorize planned product capabilities, but like MoSCoW, the Linear features will still need a secondary prioritization to establish rank.

About that KANO survey methodology… I’ve found it difficult for customers to get through a KANO survey because it asks customers to rate each feature by selecting from a set of brain-twisting options:

  • I LIKE to have this capability

  • I LIKE NOT to have this capability

  • I EXPECT to have this capability

  • I EXPECT NOT to have this capability

  • I am NEUTRAL

  • I can LIVE with this capability

  • I can LIVE WITHOUT this capability

  • I DISLIKE having this capability

  • I DISLIKE NOT having this capability

It’s taxing for customers to work through this. But regardless of whether or not you use the survey approach, the KANO classifications can be valuable when considering product priorities purely from the customer’s standpoint.
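
For reference, the classic KANO survey asks each of those questions twice per feature, once assuming the feature is present (the functional form) and once assuming it is absent (the dysfunctional form), then cross-tabulates the two answers into a category. A sketch of that standard evaluation table, mapped onto this article’s category names (plus a “questionable” result for contradictory answers):

```python
ANSWERS = ("like", "expect", "neutral", "live_with", "dislike")

# Classic KANO evaluation table.
# Rows: functional answer (feature present).
# Columns, in ANSWERS order: dysfunctional answer (feature absent).
TABLE = {
    "like":      ("questionable", "delighter", "delighter", "delighter", "linear"),
    "expect":    ("reverse", "indifferent", "indifferent", "indifferent", "must_have"),
    "neutral":   ("reverse", "indifferent", "indifferent", "indifferent", "must_have"),
    "live_with": ("reverse", "indifferent", "indifferent", "indifferent", "must_have"),
    "dislike":   ("reverse", "reverse", "reverse", "reverse", "questionable"),
}

def kano_category(functional, dysfunctional):
    """Map one customer's pair of answers for a feature to a KANO category."""
    return TABLE[functional][ANSWERS.index(dysfunctional)]

# A customer who would LIKE to have it and would DISLIKE not having it:
kano_category("like", "dislike")  # a Linear feature: the more, the better
```

The cross-tabulation is part of why the survey is so taxing for customers, but it is also what lets the answers resolve into the categories above.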

Custom Criteria Model

Which brings me to my favorite prioritization methodology. I like this one because you decide just which criteria best suit your organization’s particular product category, market, business priorities, company culture, and other factors. Here’s an example matrix.

Like the standard criteria methods, you assign ratings to a set of criteria to arrive at a score for each feature. But you pick the criteria to be used and, since not all criteria are equally important, you may also weight each criterion relative to the others.

Importantly, implementing a custom criteria matrix requires a significant set-up step. You must reach consensus with all the relevant stakeholders concerning which criteria are to be used in the scheme. Those stakeholders (Product, Marketing, Engineering, Sales, Finance, Business, other) will each bring their own perspective and priorities to the discussion. These discussions can be quite animated because opinions, anecdotes, and loud voices will appear. But in the end, consensus can be reached concerning the most valuable criteria to use, and with that, half the battle is won.

Part of the beauty of this system is that the criteria options are virtually unlimited, allowing the organization to reach consensus on a set of factors well tuned to its context. Yet consensus does not mean including every criterion that anyone wanted. Typically, if more than a half-dozen criteria are employed, the subsequent feature-rating process becomes too complicated to manage. So criteria selection must itself be highly selective, which means employing everyone’s best critical thinking in a thoughtful discussion.

Here are some criteria options to choose from that have proven valuable:

Of course, these are not the only choices. Create whatever criteria best suit your business. Note that, as with WSJF, you need to specify just what the rating scale is measuring and what each rating means (as shown above).

Some teams like to include a column in the matrix for the exact estimated cost of each item, then use this as a denominator for the sum of all the other criteria. Like the RICE and WSJF models, this puts cost in a dominant role for prioritization. Particularly when resources are very constrained, this may be appropriate. However, as previously noted, this approach can artificially elevate low-priority features simply because they are cheap to execute. A release that contains loads of cheap but low-value items is not one that customers will appreciate, nor one that will differentiate you from the competition.
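
A minimal sketch of a custom criteria matrix in Python. The criteria, weights, and ratings below are purely illustrative (your stakeholders choose their own during set-up), and the optional cost denominator carries the caveat just noted:

```python
# Hypothetical criteria and relative weights agreed by stakeholders.
WEIGHTS = {
    "customer_value":  3,  # value the feature delivers to customers
    "strategic_fit":   2,  # alignment with current business strategy
    "differentiation": 2,  # how strongly it sets us apart from competitors
    "revenue_impact":  1,  # expected contribution to revenue
}

def priority_score(ratings, cost=None):
    """Weighted sum of 1-10 ratings per criterion.

    Pass `cost` to use it as a denominator, which favors cheap features
    (appropriate when resources are tight, risky otherwise)."""
    score = sum(weight * ratings[name] for name, weight in WEIGHTS.items())
    return score / cost if cost is not None else score

feature_a = priority_score({"customer_value": 8, "strategic_fit": 6,
                            "differentiation": 9, "revenue_impact": 4})
```

Because the weights and rating definitions are fixed up front, every feature is scored against the same agreed yardstick, which is what turns argumentation into quantification.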

Conclusion

In the end, the prioritization methodology you utilize is a choice that all the relevant stakeholders need to buy into because you all need to stick with what you’ve picked for a long time. It’s well worth enduring all the opinions, anecdotes, and loud voices while you deliberate which methodology is right for your organization because that methodology will be the very mechanism to quiet those things ongoing.

Next

Rationalize Product Initiatives using Opportunity Solution Trees

The Problem

Product enhancements are too often a scattershot affair propelled by internal brainstorms or customer demands. These enhancement opportunities and the solutions they spawn frequently go without necessary validation or meaningful alignment to business goals.

The Value Proposition

The Opportunity Solution Tree (OST) is an approach to help you analyze and act upon the ever-evolving insights that emerge from continuous discovery efforts while rooting your actions in business objectives. It’s a great tool for continuous product improvement.

(Continuous Discovery refers to your ongoing collection of customer insights via interviews, observation, data analysis, etc. that identifies customers’ ever-evolving needs, issues, and preferences which, in turn, offer new Opportunities to improve and grow your product/business.)

Detailed in Teresa Torres’s book Continuous Discovery Habits, OSTs are particularly valuable for digital products due to the category’s speed of product innovation and the ability to incrementally deliver new value to customers (in the form of new or improved features). We’ll focus on a digital product example here.

Steps for Building an OST

1. Identify Business Objectives Related to the Product

(Desired business results)

Typically: Revenue; Growth; Churn; Profit; Costs; Customer Satisfaction; etc.

Tip: Since the Opportunity Solution Tree is a product-specific tool, the business objectives identified must be factors that the target product can impact.

Example:

2. Establish Specific Product Outcomes to be Accomplished

(The improved results in the product that will drive those business objectives)

Typically: Overall usage; Usage of particular features; Workflow improvements; Support for new use cases; Increased efficiencies; Reduced time to complete tasks; Higher assessed quality of product components; Process changes; Improved customer satisfaction measures; etc.

Note that, in practice, steps 2 and 3 run concurrently. The insights gained through the continuous discovery efforts of step 3 often identify the valuable Product Outcomes called out in step 2. Conversely, envisioned Product Outcomes (step 2) can drive new discovery efforts (step 3) aimed at validating the real value of those Product Outcomes.

Tips:

  • Target Product Outcomes must align to a key Business Objective

  • Must allow for multiple potential solutions

  • Limit the number of Product Outcomes to pursue, then pursue one at a time

  • Set metrics for the select outcomes by quarter

  • Initial target outcomes may focus only on learning to understand the problem space, then graduate to performance goals

Example:

(which we believe will drive a reduction in overall customer churn)

3. Identify Product Opportunities

(The customer needs, pains, and desires that are the gateways to achieving those improved product outcomes.)

These insights are the result of your continuous discovery activities.

Typically: Customer Difficulties; Complexities; Unsupported needs; Lack of understanding or communications; Missing capabilities; Time factors; Bottlenecks; Ease of use issues; Cost savings or efficiencies related to an outcome, etc.

Product opportunities can be expressed in descriptive customer language, e.g.: I can’t…; I want to… ; It’s difficult to…; I don’t understand…; I don’t know how to…; I have trouble with…; I don’t want…; I wish that I could…; etc.

Your Continuous Discovery efforts keep the flow of Opportunities coming.

Tips:

  • Conduct weekly discovery sessions with customers

  • Record each interaction, capturing insights, opportunities, quotes & relevant context

  • Use the Jobs-to-be-Done model to dissect needs, pains, desires

  • Use Experience Maps to represent customer experience & highlight opportunities

  • Deconstruct feature requests to determine underlying needs

  • Prioritize Opportunities according to their alignment with business objectives & relative value to customers

  • Represent Opportunities in hierarchies as needed (Parent, Child, Sibling)

Example:

4. Conduct Opportunity Experiments

(Set up experiments that test the assumptions and uncertainties of identified opportunities to discover which are the most valid and valuable opportunities to pursue)

Typically: Customer interviews; On-site or screen tracking observation; Focus groups; Data collection & analysis; Workflow mapping; Market research; Competitive research.

Example:

5. Conceive Solutions for target opportunities

(Ideate potential solutions for the prioritized Opportunities).

For the Opportunities that have been validated by opportunity experiments, apply standard ideation approaches to brainstorm a variety of solutions that address the target opportunity.

Example:

6. Conduct Solution Experiments

(Test solution alternatives to identify which will be most effective).

Solution experiments involve creating a concrete representation of proposed solutions that can be put in front of customers to gain realistic feedback, make improvements as needed, and determine with which alternative to proceed.

Typically: Pretotypes or other mock-ups; Storyboards; Concierge tests; Wizard of Oz tests; etc.

Example:
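
Pulling the six steps together, an OST can be represented as a simple nested structure: an outcome at the root, opportunities (possibly with sub-opportunities) beneath it, and candidate solutions with their experiments at the leaves. The sketch below uses hypothetical objectives, opportunities, and solutions:

```python
from dataclasses import dataclass, field

@dataclass
class Solution:
    idea: str
    experiments: list[str] = field(default_factory=list)  # solution experiments (step 6)

@dataclass
class Opportunity:
    need: str  # expressed in customer language (step 3)
    solutions: list[Solution] = field(default_factory=list)       # step 5
    children: list["Opportunity"] = field(default_factory=list)   # sub-opportunities

@dataclass
class OutcomeTree:
    business_objective: str  # step 1
    product_outcome: str     # step 2
    opportunities: list[Opportunity] = field(default_factory=list)

tree = OutcomeTree(
    business_objective="Reduce customer churn",
    product_outcome="Increase weekly active usage of reporting",
    opportunities=[
        Opportunity(
            need="I can't share reports with my team",
            solutions=[Solution("One-click share link", ["storyboard review"])],
        )
    ],
)
```

Even this toy tree shows the discipline the method enforces: every solution hangs off a validated customer opportunity, and every opportunity traces back to a product outcome tied to a business objective.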

One Cautionary Note

Building Opportunity Solution Trees is thoughtful work that requires thoughtful, deliberate action. That can run contrary to the predisposition that many businesspeople have for quick action (including me.) So, doing an OST will seem to slow you down, and that’s bound to create some angst. But the delay serves to keep more of your decisions aligned to valuable business and customer goals, to assure that less time is spent in fevered debates over opinion, and to let you more effectively explain and defend your thinking to others. The deliberative process of shaping an OST shows its efficiency in the end results. This tree has strong roots.

Bill Haines