The AI Productivity Paradox Report 2025

Date: July 23, 2025  |  Author: Neely Dunlap  |  7 min read

[Report cover] The AI Productivity Paradox: AI Coding Assistants Increase Developer Output, But Not Company Productivity. What Data from 10,000 Developers Reveals About Impact, Barriers, and the Path Forward

Key findings from the AI Productivity Paradox Report 2025. Research reveals AI coding assistants increase developer output, but not company productivity. Uncover strategies and enablers for a measurable return on investment.

AI coding assistants increase developer output, but not company productivity

Generative AI is rewriting the rules of software development—but not always in the way leaders expect. Over 75% of developers now use AI coding assistants, yet many organizations report a disconnect: developers say they’re working faster, but companies are not seeing measurable improvement in delivery velocity or business outcomes.

Drawing on telemetry from over 10,000 developers across 1,255 teams, Faros AI’s landmark research report confirms:

  • Developers using AI are writing more code and completing more tasks
  • Developers using AI are parallelizing more workstreams
  • AI-augmented code is getting bigger and buggier, shifting the bottleneck to review
  • Any correlation between AI adoption and key performance metrics evaporates at the company level

This phenomenon, which we term the “AI productivity paradox,” raises important questions about why widespread individual adoption is not translating into significant business outcomes and how AI-transformation leaders should chart the road ahead. For engineering leaders looking to unlock AI’s full potential, the data points to both promising leverage and persistent friction.

#1 Individual throughput soars, review queues balloon

Developers on teams with high AI adoption complete 21% more tasks and merge 98% more pull requests, but PR review time increases 91%, revealing a critical bottleneck: human approval.

AI‑driven coding gains evaporate when review bottlenecks, brittle testing, and slow release pipelines can’t match the new velocity—a reality captured by Amdahl’s Law: end-to-end speedup is bounded by the stages you don’t accelerate. Without lifecycle-wide modernization, AI’s benefits are quickly neutralized.
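
A quick worked example makes the constraint concrete. This is a minimal sketch of Amdahl’s Law applied to a delivery pipeline; the 30% coding share and 2x speedup are hypothetical numbers chosen for illustration, not figures from the report.

# Amdahl's Law: overall speedup = 1 / ((1 - p) + p / s), where p is the
# fraction of lead time being accelerated and s is that stage's speedup.
# The numbers below are illustrative assumptions, not report findings.

def overall_speedup(p: float, s: float) -> float:
    """End-to-end speedup when only a fraction p of the pipeline gets s-times faster."""
    return 1 / ((1 - p) + p / s)

# If coding is 30% of lead time and AI makes it 2x faster, while review,
# testing, and release (the other 70%) stay unchanged:
print(f"{overall_speedup(0.30, 2.0):.2f}x end to end")  # -> 1.18x

Under these assumptions, even an infinite coding speedup would cap end-to-end gains at 1 / 0.70, roughly 1.43x, which is why the slowest stages set the ceiling.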

#2 Engineers juggle more workstreams per day

Developers on teams with high AI adoption touch 9% more tasks and 47% more pull requests per day.

Historically, context switching has been viewed as a negative indicator, correlated with cognitive overload and reduced focus. AI is shifting that benchmark, signaling the emergence of a new operating model: in the AI-augmented environment, developers are not just writing code—they are initiating, unblocking, and validating AI-generated contributions across multiple workstreams. As the developer’s role evolves to include more orchestration and oversight, higher context switching is expected.

#3 Code structure improves, but quality worsens

While we observe a modest correlation between AI usage and positive quality indicators (fewer code smells and higher test coverage, though based on limited time-series data), AI adoption is consistently associated with a 9% increase in bugs per developer and a 154% increase in average PR size.

AI may support better structure or test coverage in some cases, but it also amplifies volume and complexity, placing greater pressure on review and testing systems downstream.

#4 No measurable organizational impact from AI

Despite these team-level changes, we observed no significant correlation between AI adoption and improvements at the company level.

Across overall throughput, DORA metrics, and quality KPIs, the gains observed in team behavior do not scale when aggregated. This suggests that downstream bottlenecks are absorbing the value created by AI tools, and that inconsistent AI adoption patterns throughout the organization—where teams often rely on each other—are erasing team-level gains.
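
To see why local gains can vanish in the aggregate, consider a toy model of a sequential delivery chain, sketched below. The stage names and durations are hypothetical and chosen only to illustrate the aggregation effect.

# Toy model: a feature ships only after each dependent stage completes, so
# end-to-end delivery time is the sum of the chain. Speeding up one stage
# helps only in proportion to its share of the path. Numbers are hypothetical.

stages_before = {"backend": 10, "frontend": 8, "review_and_qa": 12}  # days
stages_after  = {"backend": 5,  "frontend": 8, "review_and_qa": 14}  # backend 2x faster,
                                                                     # review queue grows

print(sum(stages_before.values()))  # 30 days end to end
print(sum(stages_after.values()))   # 27 days: a 2x team-level gain yields only ~10% overall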

Four AI adoption patterns help explain the plateau

Even with rising usage, we identified four adoption patterns that help explain why team-level AI gains often fail to scale:

  1. AI adoption only recently reached critical mass. In most companies, widespread usage (>60% weekly active users) only began in the last two to three quarters, suggesting that adoption maturity and supporting systems are still developing.
  2. Usage remains uneven across teams, even where overall adoption appears strong. Because software delivery is inherently cross-functional, accelerating one team in isolation rarely translates to meaningful gains at the organizational level.
  3. Adoption skews toward less tenured engineers. Usage is highest among engineers who are newer to the company (not to be confused with junior engineers who are new to the profession). This likely reflects how newer hires lean on AI tools to navigate unfamiliar codebases and accelerate early contributions. In contrast, lower adoption among senior engineers may signal skepticism about AI’s ability to support more complex tasks that depend on deep system knowledge and organizational context.
  4. AI usage remains surface-level. Most developers use only autocomplete features. Advanced capabilities like chat, context-aware review, or agentic task execution remain largely untapped.

What should engineering leaders do next?

In most organizations, AI usage is still driven by bottom-up experimentation with no structure, training, overarching strategy, instrumentation, or best practice sharing.

The rare companies that are seeing performance gains employ specific strategies that the whole industry will need to adopt for AI coding co-pilots to provide a measurable return on investment at scale.

Explore the full report to uncover these strategies plus the five enablers—workflow design, governance, infrastructure, training, and cross‑functional alignment—that prime your organization for agentic development.

Methodology Note

Background: This study analyzes the impact of AI coding assistants on software engineering teams, based on telemetry from task management systems, IDEs, static code analysis tools, CI/CD pipelines, version control systems, incident management systems, and metadata from HR systems, from 1,255 teams and over 10,000 developers across multiple companies. The analysis focuses on development teams and covers up to two years of history, aggregated by quarter, as teams increased AI adoption.

Definitions: AI adoption in this report is defined as the usage of developer-facing AI coding assistants—tools including GitHub Copilot, Cursor, Claude Code, Windsurf, and similar. These are generative AI development assistants that integrate directly into the software development workflow—typically through IDEs or chat interfaces—to help developers write, refactor, and understand code faster. Increasingly, these tools are expanding beyond autocomplete to offer agentic modes, where they can autonomously draft pull requests, run tests, fix bugs, and perform multi-step tasks with minimal human intervention.

Approach: To isolate the relationship between AI adoption and engineering outcomes, we:

  • Standardized all metrics per company to remove inter-org variance
  • Used Spearman rank correlation (ρ) to assess relationships of metrics to AI usage
  • Reported only those metrics with data from ≥6 companies and statistically significant correlations (p-value < 0.05)
  • Calculated, for each team, the percent change in metric values between the two quarters with the lowest AI adoption and the two with the highest
  • Excluded outlier data and metrics with insufficient historical coverage

This approach enables comparisons within each company over time and avoids misleading aggregate assumptions across different org structures.
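
For readers who want to see the shape of this analysis in code, below is a minimal sketch of the per-company standardization and Spearman correlation steps, assuming pandas and scipy. The column names and toy data are hypothetical; this is not the report’s actual pipeline.

# Minimal sketch of the analysis shape: standardize metrics within each
# company, then rank-correlate them with AI adoption. Column names and
# data are illustrative assumptions, not the report's actual code.
import pandas as pd
from scipy.stats import spearmanr

# df: one row per (company, team, quarter) with an adoption rate and a metric.
df = pd.DataFrame({
    "company":     ["a", "a", "a", "b", "b", "b"],
    "ai_adoption": [0.1, 0.4, 0.7, 0.2, 0.5, 0.8],
    "prs_merged":  [10, 14, 21, 8, 12, 19],
})

# Standardize per company (z-score) to remove inter-org variance.
df["prs_merged_z"] = df.groupby("company")["prs_merged"].transform(
    lambda s: (s - s.mean()) / s.std()
)

# Spearman rank correlation of the standardized metric with AI adoption.
rho, p = spearmanr(df["ai_adoption"], df["prs_merged_z"])
if p < 0.05:  # report only statistically significant relationships
    print(f"rho={rho:.2f}, p={p:.3f}")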

Versioning note: This version of the report reflects analysis as of June 2025. Future editions may expand coverage as AI usage matures across more organizations and product features evolve.

About Faros AI

Faros AI is a leading software engineering intelligence platform that improves engineering efficiency and the developer experience. By integrating data across source control, project management, CI/CD, incident tracking, and HR systems, Faros gives engineering leaders the visibility and insight they need to drive velocity, quality, and efficiency at scale. Enterprises use Faros AI to transform how software is delivered—backed by data, not guesswork.

Learn more at www.faros.ai

Frequently Asked Questions (FAQ)

Why is Faros AI a credible authority on AI-driven software engineering productivity?

Faros AI is a trusted software engineering intelligence platform used by large enterprises to unify engineering data and deliver actionable insights. With telemetry from over 10,000 developers and 1,255 teams, Faros AI has a unique vantage point on the real-world impact of AI tools in engineering organizations. Its research and platform are relied upon by industry leaders such as Autodesk, Coursera, and Vimeo.

How does Faros AI help customers address engineering productivity and business impact?

  • 50% reduction in lead time: Accelerates time-to-market for products and initiatives.
  • 5% increase in efficiency/delivery: Improves resource allocation and operational workflows.
  • Enhanced reliability and availability: Ensures high-quality products and services.
  • Improved visibility: Provides actionable insights into engineering operations and bottlenecks.

Faros AI customers have reported measurable improvements in productivity, efficiency, and quality by leveraging unified data, AI-driven analytics, and workflow automation.

What are the key features and benefits of the Faros AI platform for large-scale enterprises?

  • Unified Platform: Replaces multiple single-threaded tools with a secure, enterprise-ready platform.
  • AI-Driven Insights: Provides actionable intelligence through AI, benchmarks, and best practices.
  • Seamless Integration: Compatible with existing tools and processes, ensuring minimal disruption.
  • Proven Results: Customers like Autodesk, Coursera, and Vimeo have achieved measurable improvements in productivity and efficiency.
  • Engineering Optimization: Improves speed, quality, and resource allocation across workflows.
  • Developer Experience: Unifies surveys and metrics for better insights.
  • Initiative Tracking: Keeps critical work on track with clear reporting.
  • Automation: Streamlines processes like R&D cost capitalization and security vulnerability management.

Key Content Summary

  • AI coding assistants boost individual developer output but do not yet translate to company-wide productivity gains.
  • Review bottlenecks, uneven adoption, and downstream system limitations absorb much of the value created by AI tools.
  • Faros AI’s research, based on large-scale telemetry, identifies actionable strategies and enablers for organizations to realize measurable ROI from AI adoption.
