A better way to prioritize feature backlogs: the CERB scoring method

2026-01-09 · 7 min read

When you're on a software team, planning for the weeks and months to come is always a challenge. You have to balance deep feature backlogs, business and leadership requests, customer requests, and operational interruptions. Effective planning requires a way to prioritize the backlog, set realistic roadmap goals, and justify decisions.

There are several ways to approach this problem semi-objectively, including RICE, the method our team used previously. The RICE method has four metrics to quantify feature priority: reach, impact, confidence, and effort. Those metrics are then applied to the following equation to generate a RICE score: (reach * impact * confidence) / effort. Projects with higher RICE scores should have higher priority in a team’s backlog.
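The RICE formula above can be sketched in a few lines of Python. This is a minimal illustration of the equation from the post; the example values (reach, impact, confidence, effort) are hypothetical.

```python
def rice_score(reach, impact, confidence, effort):
    """RICE score: (reach * impact * confidence) / effort."""
    return (reach * impact * confidence) / effort

# Hypothetical project: reaches 500 users per quarter, impact 2,
# 80% confidence, 3 person-months of effort.
score = rice_score(reach=500, impact=2, confidence=0.8, effort=3)
print(round(score, 1))  # 266.7
```

Note that the four inputs use entirely different units and scales (users, a multiplier, a percentage, person-months), which is part of what makes the final number hard to reason about intuitively.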

My team builds Grafana Cloud’s SLO and Service Center products. While the RICE method quantifies the prioritization process, our team encountered several challenges with this method that we wished to improve upon.

That's why we created a revised and simplified scoring method we call CERB (customer impact, excitement, relative size, business impact). CERB helps teams identify the most valuable work and justify prioritization decisions with a quantifiable metric. This post details the challenges we wished to address with prioritization and describes how the CERB scoring model works.

Prioritization challenges

The most obvious issue we found with RICE scoring is that it wasn't accurate for our team. After evaluating several quarters' worth of historical planning scores and roadmaps, it was clear that projects with top RICE scores were consistently de-prioritized as roadmap items. This indicates that the properties RICE measures didn't match the ones our team actually valued.

I suspect that one of the culprits for scoring misalignment with RICE is the difficulty of estimating each component of the score:

  • The reach metric may have objective measurement methods for mature products and features, but it's harder to define for greenfield work. 
  • Impact is perhaps the most straightforward of the metrics, and it's carried over into our own scoring method. 
  • Confidence is notoriously difficult to estimate, especially for new features where the product-market fit is still being explored. 
  • Effort is often measured by “T-shirt sizes” in agile environments, but RICE calls for person-months, which is significantly harder to estimate.

A second issue is intuitively understanding how the RICE metrics impact the final score. While (reach * impact * confidence) / effort is a simple equation, it is not immediately obvious how each metric impacts the final score. It's difficult to understand the weighting of each metric since they use different units or scales. We sought a simpler, equally weighted metric for scoring projects.

Introducing CERB

We developed the CERB scoring model to better emphasize the values our team appreciates and to make the scoring process simpler and more intuitive. Each metric is scored on a scale of 1-5 and they are equally weighted.

Customer impact

The customer impact represents how relevant this project is to our customers.

  • How many customers are asking for this feature?
  • How much value does this add for customers?
  • Do competitors have this feature?
  • Will the presence of this feature help win deals?
  • Will the absence of this feature cause us to lose deals?

Excitement

Grafana Labs is an engineering-led company and we want to build features our teams are excited about. Our most successful features often start as hackathon ideas or open source contributions. This category represents the level of enthusiasm the team has for working on the feature.

  • 1 = I don't want to work on this
  • 2 = Not my jam, but I’ll work on this if it is a team priority
  • 3 = I'd like to work on this
  • 4 = I’d love to work on this
  • 5 = This is the project I’d most like to work on

Relative size

The relative size of the work required. The scale described here is the “T-shirt” sizing our team uses, but you can use any 1-5 scale you’d like. Grafana Labs emphasizes incremental delivery, so lower numbers are better for this one.

  • 1 = less than 1 month
  • 2 = 1 month
  • 3 = 2 months
  • 4 = 3 months (1 quarter)
  • 5 = more than 3 months (1 quarter)
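The duration thresholds above translate directly into a small lookup. This is a hedged sketch of that mapping; the function name and the idea of estimating in fractional months are my own additions.

```python
def relative_size(months):
    """Map an estimated duration in months to the 1-5 relative-size score."""
    if months < 1:
        return 1
    if months <= 1:
        return 2
    if months <= 2:
        return 3
    if months <= 3:
        return 4
    return 5  # more than a quarter

print(relative_size(0.5), relative_size(2), relative_size(4))  # 1 3 5
```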

Business impact

The business impact represents how relevant this project is to Grafana Labs.

  • Is this project aligned with organization or company initiatives?
  • Is this project a leadership/executive request?
  • Does this project support a cross-team initiative?
  • Does this project have a quantifiable impact on spending or cost? 
  • Does this project have a quantifiable impact on revenue?

Scoring

A spreadsheet with scores for various projects using the CERB method

CERB scores are created either by assigning each metric to a specific team member or by averaging the team's scores. Our team assigns customer impact to our product manager and business impact to the engineering manager, and takes the average across the engineers for the excitement and relative size scores.

The scores are equally weighted and simply added up; higher scores mean higher priority. One complication to account for before summing: the relative size score should be inverted (6 - score) so that smaller projects receive higher priority.

Example: 

  • Customer impact: 3
  • Excitement: 4
  • Relative size: 2
  • Business impact: 5
  • Score: 3 + 4 + (6 - 2) + 5 = 16
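The example above can be expressed as a small function, a hedged sketch of the CERB sum with the relative-size inversion applied (the function name and range check are my own):

```python
def cerb_score(customer, excitement, size, business):
    """Sum the four equally weighted CERB metrics, inverting relative size."""
    for metric in (customer, excitement, size, business):
        assert 1 <= metric <= 5, "each metric is scored 1-5"
    return customer + excitement + (6 - size) + business

print(cerb_score(customer=3, excitement=4, size=2, business=5))  # 16
```

Because every input lives on the same 1-5 scale, the maximum score is always 20 and the contribution of each metric to the total is immediately obvious.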

Putting it all together

Our team started working with the CERB scoring model in 2025 and has been using it for continuous planning for the past two quarters. It's easy to score, the team likes it, and so far the project scores are aligning very well with the projects we prioritize on our roadmap. In fact, for the last two quarters the team's roadmap prioritization matched our CERB scoring, and we completed the prioritized roadmap items. The lightweight relative sizing score seems sufficient to provide a good estimate of how much the team can complete in a quarter.

As I mentioned before, engineering teams at Grafana Labs possess a large amount of autonomy to decide the projects they want to work on and how those projects will be implemented, and we see CERB scores as a semi-objective metric for justifying those choices. We hope that the CERB score approach will correlate with successful outcomes, delivering innovative new features to Grafana Labs customers.

Testing CERB with your team

Again, we've been really pleased with the results, but is the CERB scoring model the right fit for your team? In our experience, it works best for engineering teams with:

  • A high degree of autonomy
  • A culture that values incremental delivery
  • A desire to balance business needs with product innovation

CERB is a good starting point if your team is struggling to quantify its prioritization decisions, or if your prioritized work continually fails to align with your roadmap. You can also try swapping equally weighted metrics to better align with your team's values if they lean in a different direction than the scoring system defined here. For example, your team may care more about innovation than excitement. In this case your team can swap the "excitement" score for an "innovation" metric and turn it into CRIB, a personalized scoring system for your team.

To successfully get this approach off the ground and secure engineering buy-in, our advice is to first run CERB in parallel with your existing method and see which one better aligns with your prioritized backlog. This allows you to collect historical data that demonstrates how the new CERB scores match with the projects your team actually delivers. 
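One lightweight way to do the parallel comparison above is to check how many of each method's top-ranked projects were actually delivered. This is a hedged sketch only; the project names, scores, and the top-N comparison are all hypothetical, and your team might prefer a rank-correlation measure instead.

```python
# Hypothetical scores for the same backlog under both methods.
rice = {"alerts": 120.0, "export": 80.0, "themes": 300.0, "sdk": 50.0}
cerb = {"alerts": 16, "export": 14, "themes": 9, "sdk": 12}
delivered = {"alerts", "export"}  # what the team actually shipped

def top_n(scores, n):
    """Return the n highest-scoring project names."""
    return set(sorted(scores, key=scores.get, reverse=True)[:n])

print(len(top_n(rice, 2) & delivered))  # RICE's top-2 hits: 1
print(len(top_n(cerb, 2) & delivered))  # CERB's top-2 hits: 2
```

A few quarters of this kind of data makes the case for (or against) switching far more concrete than intuition alone.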

To keep the process lightweight, assign each metric's scoring responsibility to the most relevant team member. For us, the product manager owns "customer impact" and the engineering manager owns "business impact," while the engineers own the "excitement" and "relative size" scores. Presenting leadership with a transparent, quantifiable metric that justifies team priorities and correlates with successful delivery will be your strongest argument for making the permanent switch.

Grafana Cloud is the easiest way to get started with metrics, logs, traces, dashboards, and more. We have a generous forever-free tier and plans for every use case. Sign up for free now!
