- RICE stands for Reach, Impact, Confidence, Effort
- It is used to evaluate project ideas and features
- It balances value against effort
- It considers the impact on a single objective only
Reach ranks your ideas by the number of leads and users they will affect. A registration page touches every potential customer; an in-depth settings tweak will probably be noticed only by loyal users.
- Answers the question: How many people will this feature affect within a defined time period?
- Originally measured: Number of people/events per time period (any number).
Impact ranks your ideas by how much they influence the objective. You have to pick one main objective so that every idea is scored against the same goal. Scores like “Idea A increases conversion rate—2,” “Idea B increases adoption—3,” and “Idea C maximizes delight—2” make no sense, because they measure three different objectives.
- Answers the question: How much will this feature impact the objective when a customer encounters it?
- Originally measured: 0.25—Minimal; 0.5—Low; 1—Medium; 2—High; 3—Massive.
Confidence supports or challenges your estimates. You can only be confident when you have data to back you up. Confidence scores make the evaluation more data-driven and less emotional.
- Answers the question: How confident are you in your reach and impact estimates?
- Originally measured: 20%—Moonshot; 50%—Low Confidence; 80%—Medium Confidence; 100%—High Confidence.
Effort ranks your ideas by the amount of time their implementation requires. It completes the prioritization with the Value/Effort balance and helps you surface the Easy Wins.
- Answers the question: How much time will the feature require from the whole team: product, design, and engineering?
- Originally measured: Number of “person-months” (any number).
To get the score:
- Multiply Reach, Impact, and Confidence
- Divide the product by Effort
The result shows how much each task influences the objective per unit of time worked. Thus, you can focus on significant tasks, understanding whom you will impact, why, how, and how soon.
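The calculation above can be sketched in a few lines of code. This is a minimal illustration, not part of any RICE tool; the `Idea` class, its field values, and the sample ideas are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    reach: float       # people/events per time period
    impact: float      # 0.25, 0.5, 1, 2, or 3
    confidence: float  # 0.2, 0.5, 0.8, or 1.0
    effort: float      # person-months

    def rice_score(self) -> float:
        # Multiply Reach, Impact, and Confidence; divide by Effort.
        return self.reach * self.impact * self.confidence / self.effort

# Hypothetical ideas, sorted by RICE score (highest first).
ideas = [
    Idea("Registration page", reach=5000, impact=2, confidence=0.8, effort=3),
    Idea("Power-user tweak", reach=300, impact=1, confidence=1.0, effort=1),
]
for idea in sorted(ideas, key=Idea.rice_score, reverse=True):
    print(f"{idea.name}: {idea.rice_score():.0f}")
```

The registration page wins here despite its higher effort, because it reaches far more people.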
RICE is best for estimating the value of projects, features, user stories, ideas, and hypotheses.
Typically, PMs prioritize on their own and tell the team what their tasks for the next sprint are. Such an approach is problematic:
The PM must gather all the information alone, spending days downloading and analyzing data from all the team’s tools and services.
That information is mostly not enough. The PM has to interrupt the team to collect additional information and opinions, and in the end still falls back on gut feeling to make guesses.
Data-driven or not, the PM makes all the decisions alone and imposes the conclusions on others. The team feels disregarded and loses motivation over time.
On top of that, the PM feels overly responsible for the decisions and gets stressed out.
How to Fix
Involving your team solves all the problems. Together you:
- estimate quickly and accurately;
- destroy silos and build shared understanding.
1. Divide the criteria
Who is best at evaluating each of the four criteria? Sales and Support are probably best placed to estimate Reach; Product—Impact; Engineers and Designers—Effort. Confidence should be evaluated by everybody. Collect all opinions, whether from a newbie or an expert: their average score is the most accurate estimate you will ever get.
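Averaging independent opinions is simple arithmetic; here is a tiny sketch with made-up numbers. The vote values are hypothetical and only use the Confidence scale described above.

```python
from statistics import mean

# Hypothetical Confidence votes collected from the whole team
# (allowed values: 0.2, 0.5, 0.8, 1.0), newbies and experts alike.
confidence_votes = [0.8, 1.0, 0.5, 0.8, 0.8]

# The average of all independent votes becomes the idea's Confidence score.
avg_confidence = mean(confidence_votes)
print(f"{avg_confidence:.2f}")  # prints 0.78
```

No single vote decides the score; the outlier at 0.5 pulls the average down, but the experts' optimism still dominates.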
2. Evaluate asynchronously
Averaged estimates are accurate only when people don’t deliberate. Each person must preserve their own unique view rather than echo each other’s thoughts, so teams mustn’t discuss possible scores before assigning them. Estimate independently, as in planning poker, but outside the meeting room and whenever it’s convenient.
The payoff:
- You save time on gathering and analyzing tons of information;
- The team has full context on every idea and issue;
- They know their goal precisely;
- They’ve taken part in deciding how to achieve the goal;
- You get an accurate list of priorities;
- You eliminate one unnecessary meeting.
When it Works
RICE is great:
- When you start prioritizing—It saves you a whole load of time on thinking up sensible criteria and enables swift decision-making.
- When you need sharp focus—The Impact criterion makes you think of a single objective, so ideas that don’t affect it won’t reach the top of your list.
Don’t complicate it. Prioritization must be quick. Use boards in Ducalis—they all have RICE by default. Import tasks from a spreadsheet or a task tracker, and they’re ready for evaluation.
If after a few cycles of prioritization you:
- interpret criteria unevenly from idea to idea;
- want to change the description and scores;
- notice you ignore other important objectives;
- want to develop all parts of the product evenly.
Then it’s time to change or add custom criteria.
Don’t overthink it or try to brainstorm all criteria at once. Change them gradually, each time the need pops up; it usually happens during evaluation. Open the criteria settings and change, add, or delete whatever is needed.
The less time you spend on setting up the evaluation process and evaluation itself, the more time you can devote to building a shared understanding and bringing core values to your customers.
Try Ducalis out for yourself: it’s free forever. When you see that it works for you, invite your team.
- RICE is for prioritizing projects, features, user stories, ideas, and hypotheses.
- RICE is best in the beginning when you don’t know where to start.
- Prioritization must be teamwork—don’t estimate team jobs alone.
- Prioritization must be a time-saver—use tools and templates to accelerate the process.
- Your prioritization mechanism must evolve together with the product.