Prioritization Notes

The best ideas for deciding what to do next

Frameworks

Prioritization frameworks are ready-made models that help you translate opinions into clear and precise criteria. Choose one that meets your team's goals, or develop your own criteria based on several of them.

AARRR

Metrics for optimizing the product growth funnel.

  • Acquisition—visitors arrive from marketing channels that tell them about the product.
  • Activation—visitors convert into active users (e.g. complete onboarding).
  • Retention—users return to use the product.
  • Referral—people spread the word about the product.
  • Revenue—active users convert into paying customers.
AARRR Prioritization

What to do first to optimize the product growth funnel.

  1. Transform the AARRR steps into scoring criteria for a Weighted Scoring Matrix.
  2. Decide on criteria weights to emphasize specific goals, e.g. when Retention matters most right now.
  3. Decide on the score scale, e.g. estimates of affected users or simple scores from 1 to 5.
  4. Add an Effort criterion with a negative weight to account for development cost.
  5. Score every product-related task against each AARRR step.

AARRR score = Acquisition(Score x Weight) + Activation(S x W) + Retention(S x W) + Referral(S x W) + Revenue(S x W) - Effort(S x W)
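
A minimal sketch of this scoring in Python (the weights and scores below are illustrative, not prescribed by the framework):

    # Hypothetical weights: Retention is the current focus; Effort counts against.
    WEIGHTS = {"Acquisition": 1, "Activation": 1, "Retention": 2,
               "Referral": 1, "Revenue": 1, "Effort": -1}

    def aarrr_score(scores: dict[str, int]) -> int:
        """Sum of score x weight across the five AARRR steps, minus weighted effort."""
        return sum(scores[name] * weight for name, weight in WEIGHTS.items())

    # Example task scored 1-5 on each criterion:
    task = {"Acquisition": 2, "Activation": 4, "Retention": 5,
            "Referral": 1, "Revenue": 3, "Effort": 4}
    print(aarrr_score(task))  # 2 + 4 + 10 + 1 + 3 - 4 = 16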

B2B SaaS Feature Prioritization

How we prioritize Ducalis features inside Ducalis.

Values (Impact drivers):

  • Activation: Improves the rate at which new users come to understand the product.
  • Delight: How will the product delight customers? How many users want it? Does it make the product more convenient?
  • Retention: How significant are the reasons to return to the product?
  • Acquisition: Helps us get new visitors per month (virality, SEO, word of mouth).
  • Reach: How many users (not companies) will use the feature?
  • Upgrades: Will provide more new ARR (new trials, plan upgrades, or expansion sales).

Efforts (cost of development):

  • Front Time: Time for front-end development
  • Back Time: Time for back-end development
  • UX: Difficulty of describing and designing UX/UI
Content Idea Prioritization

Produce content that gains you more qualified visitors.

Describe content ideas sufficiently:
  1. Brief idea, draft title, article message, supporting materials.
  2. Research keywords (Search Volume, Keyword Difficulty) using Ahrefs, SEMrush, etc.
  3. Link to a similar article for SEO reference.
Evaluate with Weighted Scoring:
  1. Competence. Do we have enough experience, materials, and data to create quality content?
  2. Time. How much time will it take?
  3. Search Volume. The estimated search volume for the target keywords.
  4. Difficulty. Average keyword difficulty.
  5. Frequency. How often do customers mention the topic?
DHM Model (by Netflix)

Brainstorm ideas to answer:

Delight customers

How will the product delight (D) customers?
Both now and in the future.

Creating a hard-to-copy advantage

What will make the product hard (H) to copy?
Brand, Network effects, Economies of scale, Counter-positioning, Unique technology, Switching costs, Process power, Captured resource.

Margin-enhancing

What are the business model (M) experiments required to build a profitable business?

Eisenhower Matrix

2x2 matrix for personal time management.

Make two separate assessments of each task:
  • Is it urgent or not urgent?
  • Is it important or not important?
Results:
  • Urgent + Important → Do First.
  • Not Urgent + Important → Schedule.
  • Urgent + Not Important → Delegate to others.
  • Not Urgent + Not Important → Avoid doing.
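
A minimal sketch of the quadrant lookup in Python (function and table names are illustrative):

    # Map (urgent, important) to the recommended action.
    QUADRANTS = {
        (True, True): "Do First",
        (False, True): "Schedule",
        (True, False): "Delegate",
        (False, False): "Avoid",
    }

    def eisenhower(urgent: bool, important: bool) -> str:
        return QUADRANTS[(urgent, important)]

    print(eisenhower(urgent=True, important=False))  # Delegate
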
Enterprise Prioritization

Deliver features with 5x more business impact by running cross-functional prioritization.

Business drivers:

  • Key Results
  • CSAT
  • Opportunities
  • Legal

Software drivers:

  • Risks
  • Architecture
  • Refinements

Confidence criteria:

  • Research
  • Urgency

Efforts:

  • Back-end
  • Front-end
  • UX
Feature Buckets
Buckets to sort features:
  • Metrics Movers—Improve key product metrics, e.g. AARRR.
  • Customer Requests—Features customers explicitly ask for; carve out a share of the roadmap for them.
  • Delights—Based on design or technology insights customers would love.
  • Strategic—Aligned with business values and goals.
When a bucket has too few features:
  • Brainstorm features that fit the empty bucket.
  • Think about how competitors could outdo your product.
  • Ask teams about features you’re not building.
When a bucket has too many features:
  • Consider whether it was created just to get work onto the roadmap.
  • Consider whether several buckets could be rolled up into fewer ones.
  • Check whether the buckets are too granular.
HEART

Two combined methods for defining metrics that reflect UX quality and project goals.

HEART—UX metrics categories.
  • Happiness—user attitudes collected via survey.
  • Engagement—user involvement measured via behavioral proxies.
  • Adoption—the number of new users of a product/feature.
  • Retention—the rate of existing users’ return.
  • Task success—behavioral metrics of UX (efficiency, effectiveness, error rate).
Goals-Signals-Metrics—a process for turning these categories into metrics.
  • Goals—Identify the goals clearly.
  • Signals—Map goals to lower-level signals sensitive to changes in design.
  • Metrics—Refine signals into metrics to track or use in an A/B test.
ICE

Evaluate each task on a 1-10 scale to prioritize initiatives.

  • Impact: Moves a user across the AARRR funnel.
    Scored: 1—minimal impact; 10—massive impact.
  • Confidence: How convinced you are that the feature will work out.
    Scored: 1—not sure it works; 10—certain to work out.
  • Ease: How easily the feature can be delivered.
    Scored: 1—more than 26 weeks; 10—less than 1 week.

Total Score = (Impact + Confidence + Ease) / 3
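
A minimal sketch of the formula in Python (the example scores are made up):

    def ice_score(impact: int, confidence: int, ease: int) -> float:
        """Average of the three 1-10 scores, as defined above."""
        for value in (impact, confidence, ease):
            assert 1 <= value <= 10, "ICE uses a 1-10 scale"
        return (impact + confidence + ease) / 3

    print(ice_score(impact=8, confidence=6, ease=7))  # 7.0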

Kano Model

Product development model to classify customer preferences into five categories.

Poll two groups of customers:
  • Group 1—How would they feel if they had the feature?
  • Group 2—How would they feel if they didn’t have the feature?

Scored: Expect / Like / Neutral / Dislike

Results:
  • Expect + Dislike → Must-be
  • Like + Dislike → One-dimensional
  • Like + Neutral → Attractive
  • Neutral + Neutral → Indifferent
  • Dislike + Expect → Reverse
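
A minimal sketch of the lookup in Python; treating unlisted answer pairs as inconclusive is an assumption, since the notes above only define these five combinations:

    # Map (answer with the feature, answer without it) to a Kano category.
    KANO = {
        ("Expect", "Dislike"): "Must-be",
        ("Like", "Dislike"): "One-dimensional",
        ("Like", "Neutral"): "Attractive",
        ("Neutral", "Neutral"): "Indifferent",
        ("Dislike", "Expect"): "Reverse",
    }

    def kano_category(with_feature: str, without_feature: str) -> str:
        # Assumption: pairs outside the table are inconclusive.
        return KANO.get((with_feature, without_feature), "Inconclusive")

    print(kano_category("Like", "Dislike"))  # One-dimensional
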
MoSCoW Method

The simplest method to sort tasks with only one criterion. Mark each task with just one label:

  • Must—Critical to the current delivery timebox to be a success.
  • Should—Important but not necessary in the current delivery timebox.
  • Could—Desirable but not necessary. Could improve customer satisfaction.
  • Won’t—Agreed as the least-critical. Not planned.
PLG Hypothesis Prioritization

Speed up your Product-Led Growth. This framework increases the chances of achieving product-market fit for a self-service customer experience by getting newly signed-up users closer to the AHA moment with fewer resources.

Flag criteria describe which part of the user's PLG journey they impact:

  • 1st Session
  • 1st Retention
  • 1st Payment
  • Expansion

The audience criterion demonstrates how the hypothesis affects user segments:

  • Reach

Confidence criteria evaluate the problem validation level and potential solutions' reliability:

  • Problem validation
  • Solution confidence

Resource criteria help to estimate the resources needed for testing the hypothesis:

  • Front Time
  • Back Time
  • Design Time
  • Budget
REAN

Plan and analyze a complex sequence of interrelated multichannel marketing activities.

Map each marketing channel activity by:
  • Reach—how effectively you attract visitors to your site.
  • Engage—how customers or prospects interact with your brand.
  • Activate—the actions customers take on your website.
  • Nurture—encouraging customers to return to your site and consume more content.
RICE

Four-factor framework for prioritizing initiatives. 

  • Reach—How many customers will this project impact?
    Scored: Number of people/events per time period.
  • Impact—How much will this project increase the conversion rate?
    Scored: 0.25—Minimal; 0.5—Low; 1—Medium; 2—High; 3—Massive.
  • Confidence—How much support do you have for your estimates?
    Scored: 20%—Moonshot; 50%—Low; 80%—Medium; 100%—High.
  • Effort—How much team time will the feature require?
    Scored: Number of “person-months”.

Total Score = Reach x Impact x Confidence / Effort.
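
A minimal sketch of the formula in Python (the example numbers are made up):

    def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
        """Reach x Impact x Confidence / Effort, using the scales above."""
        return reach * impact * confidence / effort

    # 800 users per quarter, High impact (2), 80% confidence, 4 person-months:
    print(rice_score(reach=800, impact=2, confidence=0.8, effort=4))  # 320.0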

Technical Debt Prioritization

Minimizes future risks and avoids slowing down the development of your software.

  • Code Knowledge. How familiar are you with the code?
  • Severity. How badly does it affect the software's functionality or performance?
  • Dependency and Scale. How many components depend on that part of the code? How much of the software architecture is impacted?
  • Cost of Fixing. How many story points would it cost to fix the technical debt issue?

Total Score = (Knowledge + Severity + Dependency) - 3 x Cost
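
A minimal sketch in Python (the example scores are made up):

    def tech_debt_score(knowledge: int, severity: int, dependency: int, cost: int) -> int:
        """(Knowledge + Severity + Dependency) - 3 x Cost."""
        return knowledge + severity + dependency - 3 * cost

    print(tech_debt_score(knowledge=4, severity=5, dependency=3, cost=2))  # 6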

The North Star Method
  1. Set a North Star metric—consolidate the work you’re doing and the value you’re delivering across acquisition, engagement, conversion, and retention.
  2. Define your user flows—define your app’s key events, draw the flows between events, and use your analytics to identify the percentage of users taking each flow.
  3. Build a growth model—use the information about the user flows, guided by the North Star metric, to determine the growth drivers.
  4. Create a spreadsheet—transfer the model to a spreadsheet and evaluate your opportunities to see how they impact growth.
Value vs. Complexity / Effort Matrix

2x2 matrix for prioritizing initiatives.

Make two separate assessments of each initiative:
  • How much value will the initiative deliver?
    Scored: High Value / Low Value
  • How much effort will the implementation require?
    Scored: High Complexity / Low Complexity
Results:
  • High Value + Low Complexity → Quick Wins.
  • High Value + High Complexity → Big Bets.
  • Low Value + Low Complexity → Maybes.
  • Low Value + High Complexity → Time Sinks.
WSJF

Weighted Shortest Job First is used to sequence jobs (e.g., Features, Capabilities, and Epics) to produce maximum economic benefit.

WSJF = Cost of Delay / Job Size

Cost of Delay = User-business value + Time criticality + Risk reduction & opportunity enablement value

Score each parameter on the Fibonacci scale (1, 2, 3, 5, 8, 13, 21).
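
A minimal sketch in Python (the example scores are made up):

    FIBONACCI = {1, 2, 3, 5, 8, 13, 21}

    def wsjf(value: int, criticality: int, risk_opportunity: int, job_size: int) -> float:
        """Cost of Delay (sum of the three parameters) divided by Job Size."""
        for parameter in (value, criticality, risk_opportunity, job_size):
            assert parameter in FIBONACCI, "parameters use the Fibonacci scale"
        return (value + criticality + risk_opportunity) / job_size

    print(wsjf(value=8, criticality=5, risk_opportunity=3, job_size=2))  # 8.0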

Weighted Scoring Model

Numerical scoring for prioritizing initiatives across multiple weighted criteria.

Steps:
  1. List the initiatives under consideration.
  2. Devise a set of cost-vs-benefit criteria to score each initiative.
  3. Determine the weights of each criterion by their importance.
  4. Assign individual scores for each initiative by each criterion.
  5. Multiply each score by the criterion weight.
  6. Add up the results for each initiative.
  7. Rank initiatives by their total score.
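
A minimal sketch of the full loop in Python (criteria, weights, and scores are illustrative):

    # Steps 2-3: criteria and weights; positive weights are benefits, negative are costs.
    WEIGHTS = {"Value": 3, "Reach": 2, "Effort": -2}

    def weighted_score(scores: dict[str, int]) -> int:
        # Steps 5-6: multiply each score by its criterion weight and add up.
        return sum(scores[criterion] * weight for criterion, weight in WEIGHTS.items())

    # Steps 1 and 4: the initiatives and their per-criterion scores.
    initiatives = {
        "Initiative A": {"Value": 5, "Reach": 2, "Effort": 3},
        "Initiative B": {"Value": 3, "Reach": 4, "Effort": 1},
    }

    # Step 7: rank initiatives by total score, highest first.
    for name in sorted(initiatives, key=lambda n: weighted_score(initiatives[n]), reverse=True):
        print(name, weighted_score(initiatives[name]))  # Initiative B 15, Initiative A 13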

How to choose a prioritization framework?

Not sure which to choose? Download our prioritization frameworks guide with questions, examples, and useful links. Explore it as a hi-res PNG, a PDF, or an interactive Miro board.