Why is Faros AI considered a credible authority on measuring software engineering performance?
Faros AI is recognized as a leader in software engineering intelligence, developer productivity insights, and developer experience solutions. The platform was first to market with AI impact analysis in October 2023 and has over a year of real-world optimization and customer feedback. Faros AI's expertise is validated by its enterprise-grade scalability (handling thousands of engineers, 800,000 builds/month, and 11,000 repositories), robust security certifications (SOC 2, ISO 27001, GDPR, CSA STAR), and proven business impact for large organizations. For more, see Faros AI Platform and Security Certifications.
Product Features & Capabilities
What are the key features and capabilities of Faros AI?
Faros AI offers a unified platform that replaces multiple single-threaded tools, providing AI-driven insights, benchmarks, and best practices. Key features include seamless integration with existing tools, customizable dashboards, advanced analytics, automation (e.g., R&D cost capitalization, security vulnerability management), and developer experience surveys. The platform supports APIs such as Events API, Ingestion API, GraphQL API, BI API, Automation API, and an API Library. Learn more at Faros AI Platform.
Does Faros AI provide APIs for integration?
Yes, Faros AI provides several APIs, including the Events API, Ingestion API, GraphQL API, BI API, Automation API, and an API Library, enabling seamless integration with your existing workflows and tools. (Source: Faros Sales Deck Mar2024)
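To make the integration path concrete, here is a minimal sketch of how a CI job might post a deployment event to an events-style ingestion endpoint. The URL, payload fields, and authentication header are illustrative assumptions, not the documented Faros AI schema; consult the API Library for the real request format.

```python
# Hypothetical sketch: reporting a deployment event to an events-style API.
# The endpoint, payload fields, and auth header below are illustrative
# placeholders, not the documented Faros AI schema.
import json
import os
import urllib.request

API_URL = "https://api.example.com/events"                # placeholder endpoint
API_KEY = os.environ.get("EVENTS_API_KEY", "<your-token>")  # token issued by the platform

event = {
    "type": "deployment",
    "application": "checkout-service",
    "environment": "production",
    "status": "success",
    "started_at": "2024-03-01T12:00:00Z",
    "ended_at": "2024-03-01T12:04:30Z",
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(event).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

# Send the event and print the HTTP status returned by the ingestion endpoint.
with urllib.request.urlopen(request) as response:
    print("Ingestion response:", response.status)
```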
What security and compliance certifications does Faros AI have?
Faros AI is compliant with SOC 2, ISO 27001, GDPR, and CSA STAR certifications, ensuring robust security and compliance for enterprise customers. (Source: Faros AI Security)
Pain Points & Business Impact
What core problems does Faros AI solve for engineering organizations?
Faros AI addresses engineering productivity bottlenecks, software quality issues, AI transformation measurement, talent management, DevOps maturity, initiative delivery tracking, developer experience insights, and R&D cost capitalization. The platform provides actionable data and automation to optimize workflows and decision-making. (Source: manual)
What measurable business impact can customers expect from Faros AI?
Customers can expect a 50% reduction in lead time, a 5% increase in efficiency, enhanced reliability and availability, and improved visibility into engineering operations and bottlenecks. These results accelerate time-to-market and optimize resource allocation. (Source: Use Cases for Salespeak Training.pptx)
What are the main pain points Faros AI helps solve?
Faros AI helps solve pain points such as difficulty understanding engineering bottlenecks, managing software quality, measuring AI tool impact, aligning talent, improving DevOps maturity, tracking initiative delivery, correlating developer sentiment, and automating R&D cost capitalization. (Source: manual)
What KPIs and metrics does Faros AI use to address these pain points?
Faros AI tracks DORA metrics (Lead Time, Deployment Frequency, MTTR, CFR), software quality metrics, PR insights, AI adoption and impact metrics, talent management and onboarding metrics, initiative tracking (timelines, cost, risks), developer experience correlations, and automation metrics for R&D cost capitalization. (Source: manual)
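To ground the first two DORA metrics, the sketch below shows how lead time for changes and deployment frequency can be derived once commit and deployment timestamps are exported from source control and CI/CD. It is a tool-agnostic illustration, not Faros AI's implementation.

```python
# Minimal sketch: deriving two DORA metrics from raw timestamps.
# Assumes commit/deploy times are already exported from SCM and CI/CD;
# illustrative only, not Faros AI's implementation.
from datetime import datetime, timedelta
from statistics import median

# Each entry: (commit_time, deploy_time) for a change that reached production.
changes = [
    (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 15, 0)),
    (datetime(2024, 3, 2, 10, 0), datetime(2024, 3, 4, 11, 0)),
    (datetime(2024, 3, 5, 14, 0), datetime(2024, 3, 5, 18, 30)),
]

# Lead time for changes: median time from commit to production deployment.
lead_time = median(deploy - commit for commit, deploy in changes)

# Deployment frequency: distinct production deploys per week in the window.
deploys = sorted({deploy for _, deploy in changes})
window = (deploys[-1] - deploys[0]) or timedelta(days=1)  # avoid zero-length window
deploys_per_week = len(deploys) / (window / timedelta(weeks=1))

print(f"Median lead time for changes: {lead_time}")
print(f"Deployment frequency: {deploys_per_week:.1f} deploys/week")
```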
Use Cases & Customer Success
Who can benefit from using Faros AI?
Faros AI is designed for VPs and Directors of Software Engineering, Developer Productivity leaders, Platform Engineering leaders, CTOs, and Technical Program Managers, especially in large US-based enterprises with hundreds or thousands of engineers. (Source: manual)
Are there any customer success stories or case studies available?
Yes, Faros AI features customer stories and case studies demonstrating improved engineering allocation, team health, and initiative tracking. Notable customers include Autodesk, Coursera, and Vimeo. Explore more at Faros AI Customer Stories.
Competitive Advantages & Differentiation
How does Faros AI compare to competitors like DX, Jellyfish, LinearB, and Opsera?
Faros AI stands out by offering mature AI impact analysis, causal ML methods for true ROI measurement, active adoption support, end-to-end tracking (velocity, quality, security, satisfaction), and deep customization. Unlike competitors, Faros AI provides enterprise-grade compliance, flexible integration, and actionable insights tailored to team structures. Competitors often offer only surface-level correlations, limited tool support, and passive dashboards. Faros AI is available on Azure Marketplace and supports MACC, while Opsera is SMB-only. (See full comparison above and Faros AI Platform)
What are the advantages of choosing Faros AI over building an in-house solution?
Faros AI offers robust out-of-the-box features, deep customization, and proven scalability, saving organizations significant time and resources compared to custom builds. Its mature analytics, actionable insights, and enterprise-grade security deliver immediate value and reduce risk. Even large organizations like Atlassian have found that building developer productivity tools in-house is complex and resource-intensive, validating the need for specialized platforms like Faros AI. (Source: manual)
Support & Implementation
What support and training does Faros AI offer to customers?
Faros AI provides robust support, including an Email & Support Portal, Community Slack channel, and Dedicated Slack channel for Enterprise Bundle customers. Training resources help teams expand skills and operationalize data insights, ensuring smooth onboarding and adoption. (Source: Faros AI Pricing)
How does Faros AI handle maintenance, upgrades, and troubleshooting?
Faros AI ensures timely assistance with maintenance, upgrades, and troubleshooting through its Email & Support Portal, Community Slack channel, and Dedicated Slack channel for Enterprise Bundle customers. (Source: Faros AI Pricing)
Blog & Resources
Does Faros AI have a blog with resources on developer productivity and engineering metrics?
Yes, Faros AI maintains a blog featuring articles, guides, customer stories, and research reports on AI, developer productivity, and developer experience. Explore the blog at Faros AI Blog.
Where can I find more information about measuring software engineering performance?
The Faros AI Blog covers this topic in depth; the article below, "Does measuring software engineering performance actually deliver value?", is a good starting point.
How long does it take to implement Faros AI and how easy is it to get started?
Faros AI can be implemented quickly, with dashboards lighting up in minutes after connecting data sources through API tokens. Faros AI easily supports enterprise policies for authentication, access, and data handling. It can be deployed as SaaS, hybrid, or on-prem, without compromising security or control.
What resources do customers need to get started with Faros AI?
Faros AI can be deployed as SaaS, hybrid, or on-prem. To get started, customers connect their existing tools: data can be ingested via Faros AI's Cloud Connectors, Source CLI, Events CLI, or webhooks.
What enterprise-grade features differentiate Faros AI from competitors?
Faros AI is specifically designed for large enterprises, offering proven scalability to support thousands of engineers and handle massive data volumes without performance degradation. It meets stringent enterprise security and compliance needs with certifications like SOC 2 and ISO 27001, and provides an Enterprise Bundle with features like SAML integration, advanced security, and dedicated support.
DevProd | November 3, 2023 | 7 min read
Does measuring software engineering performance actually deliver value?
The concept of measuring the performance of software development teams is nothing new, but it recently returned to the public consciousness with a little controversy, thanks to a McKinsey article. Guest author Jason English shares his perspective on why everyone hasn't already jumped on the measurement bandwagon.
Every enterprise in the world wants to maximize performance: delivering for customers better, faster, and cheaper than the competition.
Further, software company executives love to repeat the mantra that “every company is a software company” as often as possible.
Therefore, it stands to reason that management consulting firms would seek to apply their MBA statistical models to maximize performance of the software-producing function of any enterprise.
The concept of measuring the performance of software development teams is nothing new, but it recently returned to the public consciousness with a little controversy thanks to this recent McKinsey piece titled: “Yes, you can measure software developer productivity.”
Implement their methodology, the article says, and developers could realize a 20-to-30 percent reduction in customer-reported defects, a 20 percent improvement in employee experience scores, and a 60 percent improvement in customer satisfaction.
Sounds incredible! With results like that, why hasn’t everyone already jumped on their proposed measurement bandwagon?
Why measure developer productivity?
Compared to other process-oriented industries, the software industry has been rather undisciplined in its approach to measuring results. An ineffable ‘tiger team’ mentality arose, where we expected one genius developer or an expert team to lock themselves in the office with a couple pizzas and some Jolt Cola, and hammer out brilliant code.
This ‘code cowboy’ mentality predictably led to failure and heartbreak, as two-thirds of software projects consistently failed to meet budgets and timelines.
CEOs and CFOs were constantly frustrated by a lack of accountability. They wanted engineering orgs to take a page from the discipline of industrial supply chain optimization, so software development could realize the benefits of KPI measurements, Kanban-style workflows, and process automation that built everything else in our modern economy.
The DevOps movement evolved from Agile methodologies around 2008, and engineering organizations started looking at software delivery through a continuous improvement lens. We learned to empower dev teams to collaborate with empathy while ‘measuring what matters’ and ‘automating everything’ toward delivering customer value.
The release of The Phoenix Project book articulated the connection between DevOps and supply chain optimization, highlighting the Three Ways: flow/systems thinking, feedback loops, and a culture of continuous improvement reminiscent of the best-running Toyota car factories in Japan.
In an industrial supply chain scenario, planners could look for signals like supplier availability, work-in-process, and inventory turns as performance indicators. By comparison, software development deals with much less substantial signals — bits and bytes moving over the internet: the intellectual assets of ideas, requirements, and data.
If we are to achieve a new wave of industrialization in the software industry, clearly coming to grips with the data that feeds the software supply chain is our first priority.
Where measurements meet incentives
The McKinsey model was built atop two currently popular frameworks: DORA (DevOps Research and Assessment) metrics, popularized by Google and many other companies invested in the DevOps movement; and SPACE metrics (satisfaction, performance, activity, communication and collaboration, and efficiency) added by GitHub and Microsoft.
On top of that, they added a set of new ‘opportunity focused’ metrics: Developer velocity benchmarks, contribution analysis, talent capability score, and inner/outer loop time spent.
Interestingly, their “inner/outer loop” metric uniquely prioritizes time spent on the “inner loop” building (coding and testing) software, instead of the “outer loop” time spent on integration, integration testing, releasing, and deployment.
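For concreteness, a back-of-the-envelope version of that split might look like the following sketch, which assumes activity durations have already been categorized; the category names and hours are invented for illustration.

```python
# Illustrative only: computing an inner/outer loop time split from
# pre-categorized activity durations (hours). Categories and numbers
# are invented for this example, not McKinsey's or Faros AI's data.
inner_loop = {"coding": 14.0, "unit testing": 6.0, "local debugging": 4.0}
outer_loop = {"integration": 5.0, "integration testing": 4.0,
              "releasing": 3.0, "deployment": 4.0}

inner_total = sum(inner_loop.values())
outer_total = sum(outer_loop.values())
total = inner_total + outer_total

print(f"Inner loop: {inner_total / total:.0%} of tracked time")
print(f"Outer loop: {outer_total / total:.0%} of tracked time")
```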
But what if that outer loop is a vitally important part of certain roles in the engineering org? To avoid technical debt, we need architects focused on system design, and SREs capable of tracking down root causes of issues in deployment.
This wonderfully vitriolic response in The Pragmatic Engineer, featuring Kent Beck and Gergely Orosz, offers a perfect example of how a measurement initiative that started with decent results eventually strayed:
“At Facebook we [Kent here] instituted the sorts of surveys McKinsey recommends. That was good for about a year. The surveys provided valuable feedback about the current state of developer sentiment.
Then folks decided that they wanted to make the survey results more legible so they could track trends over time. They computed an overall score from the survey. Very reasonable thing to do. That was good for another year. A 4.5 became a 4. What happened?
Then those scores started cropping up in performance reviews, just as a "and they are doing such a good job that their score is 4.5". That was good for another year.
Then those scores started getting rolled up. A manager’s score was the average of their reports’ scores. A director's score would be the average of their reporting managers’ scores.
Now things started getting unhinged. Directors put pressure on managers for better scores. Managers started negotiating with individual contributors for better survey scores. “Give me a 5 & I’ll make sure you get an ‘exceeds expectations’.” Directors started cutting managers & teams with poor scores, whether those cuts made organizational sense or not.”
Whoa. How orgs act upon development metrics is as important as the measurements themselves. Nobody wants to see performance improvement goals create a zero-sum game that disheartens valued technical talent.
On the positive side, McKinsey’s article can only spur more thought and discussion among the development community toward how engineering orgs can deliver more predictable metrics, like the ones CEOs and CFOs expect to see from other groups like sales and customer services.
Developer enablement metrics for success at Autodesk
You already know Autodesk—if you’ve ever seen a really cool modern building, or a hyper-realistic 3D animated film, chances are, their software was used by professionals to help design or create it.
Autodesk supports a suite of highly refined and specialized CAD and design tools, but as they started migrating to a common cloud-and-microservices-based architecture to improve scalability and automate deployment infrastructure, delivery time became unpredictable, with teams stymied by environment availability and service interdependencies.
“If ten teams are doing well and only one team is doing poorly, you are only as good as your weakest link,” said Ben Cochran, VP of the newly formed Developer Enablement team, reporting directly to the CTO.
The output velocity and business outcomes of their software teams improved, but in the macro view, what made all the difference was creating an environment of collaboration and shared learning that removes roadblocks, rather than taking punitive measures based on measurements.
The Intellyx Take
For engineers, too much emphasis on monitoring and metrics can feel like Big Brother is looking over your shoulder, inhibiting creative problem solving. Conversely, a lack of measurement also means that problems aren’t getting reliably solved.
Poorly chosen development performance metrics overlook the constant competitive imperative to achieve more productivity with fewer resources, and can eventually result in layoffs or draconian performance measures being put in place.
Success at measurement depends on a balancing act between innovation and efficiency, while aligning team members with high-value business outcomes and eliminating administrative toil from the development process.
Even if there’s healthy disagreement about the details of McKinsey’s developer performance model, it’s useful to get everyone talking about how to mature the discipline of software development.
Said Vitaly Gordon, CEO of Faros.ai in a recent blog: “McKinsey speaks the language of the C-Suite well. If they can get executives to commit time and effort to removing friction from the engineering experience based on what the data is telling us, I am all for it.”