The Top 4 Sprint Metrics that Improve Developer Productivity
Four sprint metrics any engineering team can track to improve developer productivity and unlock better outcomes.
Neely Dunlap
August 30, 2024
Sprint metrics and developer productivity
Sprints are short, typically two-week cycles in which development teams aim to complete a set amount of work. Engineering teams implement sprints to break down complex projects into manageable chunks, fostering regular delivery and continuous improvement.
At the end of a sprint, teams, line managers, product owners, TPMs, coaches, and other stakeholders review sprint metrics with the goal of continuous learning. In this article, we lay out the top four sprint metrics for improving developer productivity.
Sprint metrics answer key questions
Engineering teams measure and track many sprint metrics, several of which can be used to understand and improve developer productivity. Improving developer productivity involves addressing the inefficiencies and frictions that impact the development process, such as over-planning, context-switching, and navigating inefficient tools and workflows — all of which can undermine productivity and team satisfaction.
In retros, teams ask:
- Did we estimate our capacity correctly?
- Are we delivering well against our commitments?
- Are we working on the right things?
- What is the current morale of the team, and how may that be impacting our performance?
Four sprint metrics are best suited to answer these questions and uncover the improvements to unlock higher productivity and better outcomes.
1. Say/Do Ratio: How accurately are we estimating effort and capacity?
The say/do ratio is an essential sprint metric that measures how accurately teams estimate their capacity. Say/do ratio compares the number of story points committed at the start of a sprint (what they "say" they'll do) against what was completed by the end of it (what they "do").
The say/do ratio typically includes planned and unplanned work, so it measures how good the team is at estimating its capacity, but not whether it’s working on the right things (more on that later).
- A high say/do ratio indicates that a team has a good grasp of its capacity and efficiency. They thought they could deliver 100 story points, and they delivered close to 100 story points.
- A low say/do ratio indicates a team is not as good at planning to capacity. They thought they could deliver 100 story points, but only delivered 70. When this sprint metric is low, there is room to improve productivity. Teams should look into improving estimation accuracy, mitigating dependencies, and understanding the delays causing work to progress slower than anticipated.
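As a rough sketch, the say/do ratio is simply completed story points divided by committed story points. The function and field names below are illustrative, not tied to any particular tracker's API:

```python
def say_do_ratio(committed_points: float, completed_points: float) -> float:
    """Fraction of committed story points delivered by sprint end."""
    if committed_points <= 0:
        raise ValueError("committed_points must be positive")
    return completed_points / committed_points

# The low-ratio example above: committed 100 points, delivered 70.
print(f"{say_do_ratio(100, 70):.0%}")  # prints "70%"
```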
Visualize say/do ratio for individual sprints and track trends over time
Since the say/do ratio measures how well a team follows through on what it says it will do, the healthiest say/do ratio is 1:1, meaning that for every commitment made, the team delivers. Development teams should strive to keep this ratio as close to 1:1 as possible, demonstrating they can accurately predict their capacity and balance committed work with necessary maintenance tasks and urgent requests.
It’s also beneficial to understand the reasons why committed work was not completed, so teams can anticipate and mitigate those risks in future sprints. Implementing improvements should see the say/do ratio increase, sprint over sprint.
2. Planned/Unplanned Work Ratio: How are we managing our work?
As noted in the sprint metric above, during a sprint teams take on both planned and unplanned work.
- Planned work includes work items that have been identified, prioritized, and agreed upon before the sprint starts, for example, new features, UX changes, fixing known bugs.
- Unplanned work comprises unexpected tasks that arise during the sprint, such as urgent bug fixes, unforeseen technical issues, or emergency requests from stakeholders. Some capacity should always be reserved for unplanned work.
Visualize unplanned work ratio for individual sprints and track trends over time
A healthy sprint typically keeps unplanned work below 20% of the total. At that level, the majority of the work completed is in line with the initial commitments, and the team is successfully absorbing unforeseen tasks.
When the balance shifts past 80/20 in favor of unplanned work, the team is delivering less of what it committed to the business, an indicator of decreased productivity. This suggests a need to reevaluate sprint planning, ensure alignment with business priorities, and potentially explore underlying issues, such as inadequate risk management, technical debt, or poorly communicated cross-team dependencies.
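As a minimal sketch, assuming planned and unplanned story points per sprint are available from your tracker, the unplanned share can be computed like this:

```python
def unplanned_share(planned_points: float, unplanned_points: float) -> float:
    """Fraction of completed work that was unplanned."""
    total = planned_points + unplanned_points
    if total <= 0:
        raise ValueError("sprint has no completed work")
    return unplanned_points / total

# A sprint right at the healthy 80/20 boundary:
share = unplanned_share(planned_points=80, unplanned_points=20)
print(f"{share:.0%}")  # prints "20%"
```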
Teams find it helpful to view a couple of these sprint metrics combined over a historical view of sprints. They like to see the say/do ratio and unplanned work trend juxtaposed with what shipped and what slipped each sprint. Examining the trend and the underlying tasks helps the team identify recurring patterns and issues they can address, and then track the impact of those changes over time.
See a combined view of multiple sprint metrics and underlying tasks over a historical view of sprints to identify recurring patterns, address issues, and analyze impact over time.
3. Capacity Target Adherence: Are we working on the right things?
Capacity target adherence is another important sprint metric; it helps ensure the team is working on the right things, aligned with strategic objectives. Capacity target adherence measures the distribution of effort across predefined categories of work.
Teams typically categorize work into multiple buckets, like strategic projects, technical debt, bug fixes, and KTLO (keep-the-lights-on) work. It’s good practice to have an ideal reference target in mind each sprint, e.g., 60:20:10:10. When it comes to developer productivity, achieving the targeted distribution is an indication of high productivity, because the developers are advancing the strategic goals of the company.
Track capacity target adherence across sprints to evaluate if efforts are in line with strategic targets.
By tracking this sprint metric and reflecting on how much time was actually spent on each type of work, teams can evaluate whether their efforts are in line with strategic targets. If there is a significant discrepancy, the team should raise the issue with a leader or stakeholder to discuss the reasons and reassess priorities.
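One way to sketch this check, reusing the 60:20:10:10 example target above (the category names are illustrative, and a real implementation would pull actual points from your tracker):

```python
def adherence_gaps(actual_points: dict, target_shares: dict) -> dict:
    """Per-category gap between actual effort share and target share.

    Positive values mean over-investment; negative, under-investment.
    """
    total = sum(actual_points.values())
    return {
        category: round(actual_points.get(category, 0) / total - share, 3)
        for category, share in target_shares.items()
    }

target = {"strategic": 0.60, "tech_debt": 0.20, "bugs": 0.10, "ktlo": 0.10}
actual = {"strategic": 45, "tech_debt": 25, "bugs": 15, "ktlo": 15}  # 100 pts
print(adherence_gaps(actual, target))
# strategic at -0.15 is the kind of significant drift worth raising
```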
4. DevEx to DevProd Correlation: What insights can we derive from comparing survey responses to sprint outcomes?
For a deeper understanding of how developer experience correlates to sprint performance, engineering leaders learn a lot from juxtaposing the above three sprint metrics with developer survey data. Blending quantitative productivity measures and qualitative feedback creates a holistic view of team performance and well-being, offering deeper insights into the factors influencing developer efficiency, satisfaction, and engagement.
If a team conducts quick surveys at the end of every sprint, it can examine these correlations every sprint. If developer surveys run only once a quarter, a quarterly view works instead.
Compare team survey responses to output, and see if any unusual findings should be investigated further.
Senior leaders and domain leaders benefit from looking at these correlations between sprint metrics and developer surveys for the entire organization, for sub-orgs, and for individual teams. This helps distinguish systemic issues from team-specific challenges and prioritize continuous improvement efforts.
Analyzing results across teams illuminates thriving teams, whose best practices can be shared with others to improve outcomes.
Equipped with these enriched insights, engineering leaders can make more timely and informed decisions to enhance overall developer experience and team morale, target process refinements more precisely, and better assess the impact of changes on developer productivity.
Ready to supercharge your sprint metrics?
Elad Kochavi, an engineering team leader at Riskified, runs his retros with sprint metrics from Faros AI. According to Elad, “We now have a combined picture for all the tools we use and can do much more sophisticated analysis in place of the naive and simplified views in Jira. Our transition to data-driven retros has energized and motivated the team. They love seeing the impact of their efforts in the charts.”
Project management tools like Jira can only take your sprint metrics so far. Faros AI takes engineering data visualization to the next level with dashboards that provide a full, context-rich picture across all your teams’ sprints:
- A combined view of human- and machine-curated data from Jira, source control, CI, CD, code analysis, testing, defects, and incidents
- Hierarchical drilldowns based on org structure, product groups, teams, apps, or services
- Unlimited history
- Velocity, throughput, quality, reliability, and predictability metrics
- Team-tailored AI insights and recommendations
Reach out to our experts for more details on how our advanced sprint metrics displayed on customizable dashboards can provide your teams with deeper insights.