The RICE Score Model for Prioritizing Product Requirements
🔥 Put yourself in the shoes of a Product Manager. You've been looking into various reports and data related to your product's active user base. You notice an increase in customer drop-off over the past 6 weeks. Alarmed, you start looking for reasons and ideas to halt this trend. You decide to reach out to multiple stakeholders for suggestions, including customers and colleagues from Design, Engineering, Sales, Marketing, etc. You obtain a long list of ideas on how to tackle the issue. So far so good. However, given resource constraints, you can only start working on one or a few ideas at a time. You now face a key product management question: How do you prioritize one requirement over another? How do you decide which requirement to take up first, and which ones later?
Prioritization of Product Improvements
Given the typically long list of ideas for improving their product, Product Managers have to decide how to prioritize it. This involves comparing multiple requirements from various stakeholders. Some requirements may be genuine needs, while others may be desires or wants. A data-centric framework helps to compare all ideas/requirements properly: the use of accurate data can minimize bias and guesswork in the decision-making process.
The RICE Prioritization Model
This prioritization framework was first introduced by Intercom, a US-based communications company. The framework uses 4 aspects, each with a corresponding metric, to compare different (product) requirements – Reach, Impact, Confidence, and Efforts. The metrics are combined into a final score, and any given set of requirements can be prioritized using this final RICE Prioritization Score.
To tackle the customer drop-off issue described earlier, let's consider 4 candidate solutions – A, B, C, and D – and calculate the RICE Prioritization Score for each in order to prioritize them.
- R - Reach: Measures the number of people that would be impacted (benefitted) by the new feature in a given timeframe. Alternatively, the number of 'transactions' or 'sales' could be used.
For example, solution A benefits 100 customers a day, while solution B benefits 200 customers a day. Hence the Reach for A and B is 3000 and 6000 customers per month respectively.
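The conversion behind these numbers is simple arithmetic; a 30-day month is assumed, since that is what the article's figures imply:

```python
# Daily-reach figures from the example above; a 30-day month is assumed.
DAYS_PER_MONTH = 30

reach_a = 100 * DAYS_PER_MONTH  # Solution A: 3000 customers/month
reach_b = 200 * DAYS_PER_MONTH  # Solution B: 6000 customers/month
```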
- I - Impact: This metric aims to quantify how strongly a feature would benefit users, or how much the feature helps to achieve a goal (e.g. reducing customer drop-off). It helps to maintain focus on genuinely impactful features, rather than merely innovative features or one's favourite ideas. Given the difficulty of measuring the impact of a feature precisely, Intercom classified Impact into 5 levels – Massive (3), High (2), Medium (1), Low (0.5), and Minimal (0.25).
- C - Confidence: It is advisable to use properly calculated values for the RICE model, but some values may only be good estimates. In such scenarios, the Confidence level (expressed as a percentage) is used to account for inaccuracies in the data. Intercom highlighted 3 confidence levels – High (100%), Medium (80%), and Low (50%). Any requirement with less than 50% confidence should be reconsidered.
For example, if you are quite confident in the Reach and Impact figures for Solution D, but can only produce a fair estimate of its Efforts, you could assume an 80% confidence level for Solution D.
- E - Efforts: While Reach, Impact, and Confidence together represent the Benefits of a feature, Efforts represents the Cost of delivering it – RICE Prioritization is effectively a form of Cost-Benefit Analysis across multiple requirements. To determine the value for Efforts, consider the total time required by all relevant stakeholders (Product, Design, Engineering) to deploy the feature. The metric is expressed in 'person-months' (anything requiring less than a month can be counted as 0.5 person-month).
For example, if the Product & Design teams estimate 4 days for Solution C and the Engineering team estimates 7 days, the Efforts would be 0.5 person-months.
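This conversion can be sketched as a small helper. The function name, the assumption of roughly 20 working days per person-month, and the half-month rounding are illustrative choices, not part of the original framework:

```python
def effort_person_months(total_person_days: float, days_per_month: int = 20) -> float:
    """Convert summed team estimates (in person-days) to person-months.

    Rounds to the nearest half month and, following the guideline above,
    never reports less than 0.5 person-month.
    """
    months = total_person_days / days_per_month
    return max(round(months * 2) / 2, 0.5)

# Solution C: 4 days (Product & Design) + 7 days (Engineering) = 11 person-days.
print(effort_person_months(4 + 7))  # 0.5
```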
Calculating the RICE Score
The RICE Score is calculated using the formula below. Note that the higher the RICE score, the higher the priority of the requirement:

RICE Score = (Reach × Impact × Confidence) / Efforts
Here's a sample RICE prioritization for the proposed solutions on the customer drop-off issue:
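As an illustration, here is a minimal Python sketch of such a scoring. The Reach figures for A and B and the 80% confidence for D come from the examples above; every other value is assumed purely for demonstration:

```python
# Intercom's Impact levels, as listed earlier in this article.
IMPACT = {"massive": 3, "high": 2, "medium": 1, "low": 0.5, "minimal": 0.25}

def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE Score = (Reach x Impact x Confidence) / Efforts."""
    return reach * impact * confidence / effort

# Hypothetical inputs: only A's and B's reach and D's confidence are taken
# from the article's examples; the rest are assumed for this sketch.
solutions = {
    "A": dict(reach=3000, impact=IMPACT["medium"], confidence=1.0, effort=1.0),
    "B": dict(reach=6000, impact=IMPACT["low"], confidence=0.8, effort=2.0),
    "C": dict(reach=1500, impact=IMPACT["high"], confidence=1.0, effort=0.5),
    "D": dict(reach=2000, impact=IMPACT["massive"], confidence=0.8, effort=3.0),
}

# Higher score = higher priority, so sort in descending order.
ranked = sorted(solutions.items(), key=lambda kv: rice_score(**kv[1]), reverse=True)
for name, params in ranked:
    print(f"{name}: {rice_score(**params):.0f}")
```

With these assumed inputs, Solution C comes out on top despite its modest reach, because its high impact and low effort dominate the score.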
PROS of the RICE Framework
- Helps to make calculated decisions while prioritizing features / requirements.
- A common scoring model helps in deciding between features that may be difficult to compare otherwise.
CONS of the RICE Framework
- It doesn't capture the dependencies in the deployment of 2 or more features.
- Assumes that each user/customer provides the same incremental revenue when a new feature/service is deployed (this may not hold – in a B2B environment, some customers may be far more valuable than others).
⇒ Please share any ideas or experiences you have with prioritizing product improvements. Thank you.
Sources:
McBride, S. (n.d.). "RICE: Simple prioritization for product management", Inside Intercom
Merryweather, E. (2020). "3 Prioritization Techniques All Product Managers Should Know", Product School
Roadmunk (n.d.). "RICE Score: A prioritization framework for estimating the value of ideas", Roadmunk
ProductPlan (n.d.). "RICE Scoring Model", ProductPlan