Prioritization Notes

The best ideas for deciding what to do next

A product manager must weigh many factors to grow a product: business goals, user requests, and team resources. Deciding what's essential without a framework is slow, painful, and error-prone. Use proven models for effective decision-making and roadmap building.

B2B SaaS Feature Prioritization

How we prioritize Ducalis features inside Ducalis.

Values (Impact drivers):

  • Activation: Improves the rate at which new users come to understand the product.
  • Delight: How much will the feature delight customers? How many users want it or will find the product more convenient?
  • Retention: How strong are the reasons to return to the product?
  • Acquisition: Helps us attract new visitors each month through virality, SEO, and word of mouth.
  • Reach: How many users (not companies) will use the feature.
  • Upgrades: Will it generate new ARR (new trials, plan upgrades, or expansion sales)?

Efforts (cost of development):

  • Front Time: Time for front-end development
  • Back Time: Time for back-end development
  • UX: Difficulty of describing and designing UX/UI
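
As a rough illustration, the criteria above can be combined into a single score. The sketch below assumes the score is simply the sum of the impact-driver ratings divided by the sum of the effort estimates; Ducalis's actual aggregation may differ.

```python
# Hypothetical scoring sketch for the criteria above: the final score is
# assumed to be the sum of impact-driver ratings divided by the sum of
# effort estimates (an assumption, not Ducalis's documented formula).

def b2b_feature_score(values: dict[str, float], efforts: dict[str, float]) -> float:
    """Higher value and lower effort yield a higher priority."""
    total_value = sum(values.values())
    total_effort = sum(efforts.values())
    return total_value / total_effort if total_effort else float("inf")

score = b2b_feature_score(
    values={"activation": 8, "delight": 6, "retention": 7,
            "acquisition": 3, "reach": 9, "upgrades": 5},
    efforts={"front_time": 5, "back_time": 8, "ux": 3},
)
print(round(score, 2))  # 2.38
```
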
DHM Model (by Netflix)

Brainstorm ideas to answer:

Delight customers

How will the product delight (D) customers?
Both now and in the future.

Creating a hard-to-copy advantage

What will make the product hard (H) to copy?
Brand, Network effects, Economies of scale, Counter-positioning, Unique technology, Switching costs, Process power, Captured resource.

Margin-enhancing

What are the business model (M) experiments required to build a profitable business?

Enterprise Prioritization

Deliver 5x more business-impactful features by running cross-functional prioritization.

Business drivers:

  • Key Results
  • CSAT
  • Opportunities
  • Legal

Software drivers:

  • Risks
  • Architecture
  • Refinements

Confidence criteria:

  • Research
  • Urgency

Efforts:

  • Back-end
  • Front-end
  • UX
Feature Buckets
Buckets to sort features:
  • Metrics Movers—Improve key product metrics, e.g. AARRR.
  • Customer Requests—Features customers explicitly ask for; carve out roadmap space for them.
  • Delights—Based on insights in design/technology customers would love.
  • Strategic—Aligned with business values and goals.
When a bucket has too few features:
  • Brainstorm features that fit the empty bucket.
  • Think about how competitors could surpass your product.
  • Ask teams about features you’re not building.
When a bucket has too many features:
  • Ask whether the bucket was created just to get certain work onto the roadmap.
  • Consider whether several buckets could be rolled into fewer ones.
  • Check whether the buckets are too granular.
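
A minimal sketch of these health checks, assuming illustrative thresholds for "too few" and "too many"; the method itself prescribes no numbers.

```python
from collections import defaultdict

# Bucket names from the list above; the thresholds are illustrative assumptions.
BUCKETS = ["Metrics Movers", "Customer Requests", "Delights", "Strategic"]

def bucket_health(features: list[tuple[str, str]],
                  too_few: int = 2, too_many: int = 10) -> dict[str, str]:
    """Group (feature, bucket) pairs and flag unbalanced buckets."""
    counts: dict[str, int] = defaultdict(int)
    for _, bucket in features:
        counts[bucket] += 1
    status = {}
    for bucket in BUCKETS:
        n = counts[bucket]
        if n < too_few:
            status[bucket] = f"too few ({n}): brainstorm features for this bucket"
        elif n > too_many:
            status[bucket] = f"too many ({n}): check granularity, consider merging"
        else:
            status[bucket] = f"balanced ({n})"
    return status

print(bucket_health([("Funnel report", "Metrics Movers"),
                     ("CSV export", "Customer Requests"),
                     ("Jira sync", "Customer Requests")]))
```
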
HEART

A combination of methods for defining metrics that reflect UX quality and project goals.

HEART—UX metrics categories.
  • Happiness—user attitudes collected via survey.
  • Engagement—user involvement measured via behavioral proxies.
  • Adoption—the number of new users of a product/feature.
  • Retention—the rate at which existing users return.
  • Task success—behavioral metrics of UX (efficiency, effectiveness, error rate).
Goals-Signals-Metrics—the process that turns these categories into concrete metrics.
  • Goals—Identify the goals clearly.
  • Signals—Map goals to lower-level signals sensitive to changes in design.
  • Metrics—Refine signals into metrics to track or use in an A/B test.
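
As a sketch, one Goals-Signals-Metrics breakdown can be recorded as plain data; the Engagement goal, signals, and metrics below are hypothetical examples, not part of the framework.

```python
# Hypothetical Goals-Signals-Metrics record for the Engagement category;
# the goal, signals, and metrics are illustrative, not prescribed by HEART.
heart_engagement = {
    "category": "Engagement",
    "goal": "Users find the product valuable enough to use it frequently",
    "signals": ["sessions per user", "items edited per session"],
    "metrics": ["average weekly sessions per active user",
                "share of sessions with at least one edit (A/B-testable)"],
}
```
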
ICE

Evaluate each task on a 1-10 scale to prioritize initiatives.

  • Impact: How far it moves users along the AARRR funnel.
    Scored: 1—minimal impact; 10—massive impact.
  • Confidence: How sure you are the feature will work out.
    Scored: 1—not sure it works; 10—certain it works.
  • Ease: How little work the feature requires to deliver.
    Scored: 1—26+ weeks; 10—under 1 week.

Total Score = (Impact + Confidence + Ease) / 3
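
A direct translation of the formula, checking that each factor stays on the 1-10 scale described above:

```python
def ice_score(impact: int, confidence: int, ease: int) -> float:
    """ICE score: the average of three 1-10 ratings."""
    for factor in (impact, confidence, ease):
        assert 1 <= factor <= 10, "each factor is rated on a 1-10 scale"
    return (impact + confidence + ease) / 3

print(ice_score(impact=8, confidence=6, ease=4))  # 6.0
```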

Kano Model

Product development model to classify customer preferences into five categories.

Ask each customer two questions about the feature:
  • Question 1—How would they feel if they had the feature?
  • Question 2—How would they feel if they didn’t have it?

Scored: Expect / Like / Neutral / Dislike

Results (answer with the feature + answer without it):
  • Expect + Dislike → Must-be
  • Like + Dislike → One-dimensional
  • Like + Neutral → Attractive
  • Neutral + Neutral → Indifferent
  • Dislike + Expect → Reverse
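
The result table translates naturally into a lookup keyed by the answer pair; this sketch maps only the pairs listed above and leaves anything else unclassified.

```python
# Lookup built from the result table above: (answer with the feature,
# answer without it) -> Kano category. Unlisted pairs stay unclassified.
KANO = {
    ("Expect", "Dislike"): "Must-be",
    ("Like", "Dislike"): "One-dimensional",
    ("Like", "Neutral"): "Attractive",
    ("Neutral", "Neutral"): "Indifferent",
    ("Dislike", "Expect"): "Reverse",
}

def classify(with_feature: str, without_feature: str) -> str:
    return KANO.get((with_feature, without_feature), "Unclassified")

print(classify("Like", "Dislike"))  # One-dimensional
```
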
MoSCoW Method

The simplest method: sort tasks by a single criterion, marking each task with just one label:

  • Must—Critical for the current delivery timebox to succeed.
  • Should—Important but not necessary in the current delivery timebox.
  • Could—Desirable but not necessary. Could improve customer satisfaction.
  • Won’t—Agreed as the least-critical. Not planned.
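
As a minimal sketch, the labels can be modeled as an ordered enum so that a backlog sorts Must items first; the task names below are hypothetical.

```python
from enum import IntEnum

class MoSCoW(IntEnum):
    MUST = 1
    SHOULD = 2
    COULD = 3
    WONT = 4

# Hypothetical tasks; sorting by label puts Must items first.
tasks = [("Dark mode", MoSCoW.COULD),
         ("Fix billing bug", MoSCoW.MUST),
         ("SSO login", MoSCoW.SHOULD)]
tasks.sort(key=lambda task: task[1])
print([name for name, _ in tasks])  # ['Fix billing bug', 'SSO login', 'Dark mode']
```
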
PLG Hypothesis Prioritization

Speed up your Product-Led Growth. This framework increases the chances of achieving product-market fit for a self-service customer experience by getting newly signed-up users closer to the AHA moment with fewer resources.

Flag criteria describe which part of the user's PLG journey they impact:

  • 1st Session
  • 1st Retention
  • 1st Payment
  • Expansion

The audience criterion demonstrates how the hypothesis affects user segments:

  • Reach

Confidence criteria evaluate how well the problem is validated and how reliable the potential solutions are:

  • Problem validation
  • Solution confidence

Resource criteria help to estimate the resources needed for testing the hypothesis:

  • Front Time
  • Back Time
  • Design Time
  • Budget
RICE

Four-factor framework for prioritizing initiatives. 

  • Reach—How many customers will this project impact?
    Scored: Number of people/events per time period.
  • Impact—How much will this project increase conversion rate?
    Scored: 0.25—Minimal; 0.5—Low; 1—Medium; 2—High; 3—Massive.
  • Confidence—How much support do you have for your estimates?
    Scored: 20%—Moonshot; 50%—Low; 80%—Medium; 100%—High.
  • Effort—How much team time will the feature require?
    Scored: Number of “person-months”.

Total Score = Reach x Impact x Confidence / Effort.
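
A direct translation of the formula, using the scales described above; the example numbers are hypothetical.

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Reach (people per period) * impact (0.25-3) * confidence (0.2-1.0),
    divided by effort in person-months."""
    return reach * impact * confidence / effort

print(rice_score(reach=500, impact=2, confidence=0.8, effort=4))  # 200.0
```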

Seagull Effect

Seagull—a person who rushes into a problem without ascertaining the facts, gives orders with formulaic advice, and rushes away instead of working alongside the team.

Results:
  • High staff turnover and low morale.
  • Increased risk of heart disease among employees.
How to handle:
  1. Set clear expectations—explore what is required of the employee, how their performance will be evaluated, and agree to work towards the goals.
  2. Communicate consistently—observe what employees say and do, and speak openly with them about their work.
  3. Give powerful feedback—pay careful attention to each employee’s performance and praise as frequently as you give constructive feedback.
The North Star Method
  1. Set a North Star metric—consolidate the work you’re doing and the value you’re delivering across acquisition, engagement, conversion, and retention.
  2. Define your user flows—define your app’s key events; draw the flows between events; use your analytics to identify the percentage of users taking each flow.
  3. Build a growth model—use the user-flow information, guided by the North Star metric, to determine the growth drivers.
  4. Create a spreadsheet—transfer the model to a spreadsheet and evaluate your opportunities to see how they impact growth.
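
A toy version of steps 2-4, assuming a single flow from sign-up to one key event; all event names, rates, and the opportunity evaluated are illustrative assumptions.

```python
# Toy growth model: weekly sign-ups flow through key events, and the
# North Star metric counts users completing the key action. All event
# names and rates below are illustrative assumptions.
signups_per_week = 1000
flow_rates = {"activated": 0.40, "completed_key_action": 0.25}

north_star = signups_per_week * flow_rates["completed_key_action"]
print(north_star)  # 250.0 key actions per week

# Evaluate an opportunity: an onboarding change assumed to lift the
# key-action rate from 25% to 30%.
print(signups_per_week * 0.30 - north_star)  # +50.0 per week
```
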
WSJF

Weighted Shortest Job First is used to sequence jobs (e.g., Features, Capabilities, and Epics) to produce maximum economic benefit.

WSJF = Cost of Delay / Job Size

Cost of Delay = User-business value + Time criticality + Risk reduction and opportunity enablement value

Score each parameter on the Fibonacci scale (1, 2, 3, 5, 8, 13, 21).
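
A direct translation of the two formulas, assuming each parameter has already been scored on the Fibonacci scale:

```python
def wsjf(user_business_value: int, time_criticality: int,
         risk_reduction_opportunity: int, job_size: int) -> float:
    """WSJF = Cost of Delay / Job Size; each input is a Fibonacci score."""
    cost_of_delay = (user_business_value + time_criticality
                     + risk_reduction_opportunity)
    return cost_of_delay / job_size

print(wsjf(user_business_value=8, time_criticality=5,
           risk_reduction_opportunity=3, job_size=8))  # 2.0
```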

Weighted Scoring Model

Numerical scoring for prioritizing initiatives across multiple weighted criteria.

Steps:
  1. List the initiatives under consideration.
  2. Devise a set of cost-vs-benefit criteria to score each initiative.
  3. Determine the weights of each criterion by their importance.
  4. Assign individual scores for each initiative by each criterion.
  5. Multiply each score by the criterion weight.
  6. Add up the results for each initiative.
  7. Rank initiatives by their total score.
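
A minimal sketch of steps 4-7, assuming weights that sum to 1.0 and scores on a 1-10 scale; the model itself fixes neither the criteria nor the scales, so everything below is hypothetical.

```python
# Hypothetical criteria, weights (assumed to sum to 1.0), and 1-10 scores;
# the model prescribes neither the criteria nor the scales.
weights = {"revenue": 0.5, "strategic_fit": 0.3, "ease": 0.2}

initiatives = {
    "Feature A": {"revenue": 8, "strategic_fit": 5, "ease": 3},
    "Feature B": {"revenue": 5, "strategic_fit": 9, "ease": 6},
}

# Steps 5-6: multiply each score by its weight and sum per initiative.
totals = {
    name: sum(scores[criterion] * weight for criterion, weight in weights.items())
    for name, scores in initiatives.items()
}

# Step 7: rank initiatives by total score.
for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, round(total, 2))
# Feature B 6.4
# Feature A 6.1
```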

How to choose a prioritization framework?

Not sure which to choose? Download our prioritization frameworks guide with questions, examples, and useful links. Explore it as a hi-res PNG, a PDF, or an interactive Miro Board.