
GitHub Copilot Best Practices for Optimizing Impact

Maximize your return with a complete guide to GitHub Copilot best practices.

Naomi Lurie


October 16, 2024


Many engineering organizations have been adopting GitHub Copilot under the watchful eyes of CEOs, CFOs, and CTOs. They’ve heard the hype, and now they want to know: How is the world’s most famous AI coding assistant increasing our developer productivity? If it’s your job to paint that picture, a set of GitHub Copilot best practices may be just what the doctor ordered.

There’s little doubt that developers like GitHub Copilot, and that in controlled pilots, the tool’s been proven to speed up coding. But at the organizational level, many questions remain unanswered:

  • Adoption and Usage: How well is Copilot being adopted? How often is it being used? Do we have the right amount and type of licenses? Have we conducted sufficient training and developer enablement? (A data-pull sketch follows this list.)
  • Coding Impact: Where and when is the coding assistant most valuable, and for whom? How has it impacted developer satisfaction? How has developer productivity changed for those with licenses vs. their non-augmented peers?
  • Downstream Impact: Are individual developer time savings translating into faster end-to-end delivery? How are bottlenecks shifting? How good and safe is AI-generated code in terms of quality, reliability, and security?
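
To make the adoption and usage questions concrete, here is a minimal sketch that pulls Copilot seat assignments for an organization and flags licenses with no recent activity. It assumes GitHub's REST endpoint for Copilot seat management (GET /orgs/{org}/copilot/billing/seats) and its last_activity_at field; the organization name, token handling, and 30-day idle threshold are illustrative choices to adapt to your own environment, not a prescribed setup.

```python
# Launch-phase adoption check: list Copilot seats and flag idle ones.
# Assumes GitHub's Copilot seat-management endpoint; verify field names
# against the API version you are on.
import os
from datetime import datetime, timedelta, timezone

import requests

ORG = "your-org"                      # hypothetical organization name
TOKEN = os.environ["GITHUB_TOKEN"]    # a token with Copilot billing read access
STALE_AFTER = timedelta(days=30)      # illustrative idle threshold


def fetch_seats(org: str, token: str) -> list[dict]:
    """Page through all Copilot seat assignments for the organization."""
    seats, page = [], 1
    while True:
        resp = requests.get(
            f"https://api.github.com/orgs/{org}/copilot/billing/seats",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.github+json",
            },
            params={"per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("seats", [])
        seats.extend(batch)
        if len(batch) < 100:
            return seats
        page += 1


def unused_seats(seats: list[dict], stale_after: timedelta) -> list[str]:
    """Return logins whose seats show no Copilot activity within the window."""
    cutoff = datetime.now(timezone.utc) - stale_after
    idle = []
    for seat in seats:
        last = seat.get("last_activity_at")
        if last is None or datetime.fromisoformat(last.replace("Z", "+00:00")) < cutoff:
            idle.append(seat["assignee"]["login"])
    return idle


if __name__ == "__main__":
    seats = fetch_seats(ORG, TOKEN)
    idle = unused_seats(seats, STALE_AFTER)
    print(f"{len(seats)} seats assigned, {len(idle)} idle for 30+ days: {idle}")
```

Running a report like this on a regular cadence gives you the raw material for spotting power users, reclaiming unused licenses, and targeting enablement where adoption is lagging.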

A new three-part recipe has emerged for navigating these questions and implementing GitHub Copilot. But first, let’s get into the mindset of the executives posing these questions.

Why Measuring GitHub Copilot’s ROI is Essential in Today’s Economy

Organizations need a structured approach to measuring the impact of GitHub Copilot for two critical reasons: technology adoption dynamics and the financial pressure all companies face right now.

First, not everyone is an early adopter. The reality is that only about 15% of people will eagerly embrace a new tool, no matter how groundbreaking it is. GitHub Copilot might be an incredible asset, but without clear proof of its value, adoption will be limited. The key to increasing adoption lies in demonstrating ROI. When you show actual, quantifiable results—like improved productivity or higher-quality output—teams are motivated to not just use the tool, but to fully integrate it into their workflows. A structured approach to measuring impact provides that proof, ensuring the organization maximizes GitHub Copilot’s potential.

Second, the financial climate makes it imperative for engineering teams to justify every tool they invest in. Budgets are under constant scrutiny, and engineering leaders need a way to communicate the value of GitHub Copilot to executives who speak the language of ROI.

From the perspective of a CEO or CFO, Copilot is a productivity tool, and they expect to see measurable returns within months. Acceptance rate and lines of code written by Copilot are poor proxies for value in the eyes of the people who hold the purse strings. Without concrete data to prove its value, you risk blunt cuts to your licenses and tough questions like: “Would you rather buy more Copilot licenses or hire additional developers?” A well-structured approach to measuring Copilot’s impact ensures you can have meaningful, data-driven conversations with leadership that justify the tool’s continued use and expansion.

GitHub Copilot Best Practices: Launch-Learn-Run Framework

Many enterprises have adopted the field-proven Launch-Learn-Run framework for their Copilot journey. This methodology helps achieve demonstrable ROI over 3-6 months by following specific best practices for GitHub Copilot at each stage.

[Figure: overview and timeline for the Launch-Learn-Run framework]

Here's how the process unfolds:

  • Launch (6 weeks): Gather early signals of adoption and usage. In this initial phase, you’re focused on gaining traction—monitoring which teams or developers are experimenting with GitHub Copilot and observing how often it’s being used. Pay attention to basic usage patterns, power users, and unused licenses to build a foundation for future insights.
  • Learn (~3 months): Conduct regular developer surveys to understand both time savings and the overall sentiment around GitHub Copilot. This is also the ideal moment to run A/B tests comparing metrics between developers using Copilot and those who are not. Some organizations also trial different license levels, such as Business or Enterprise, to see which version delivers more value. By the end of this phase, you’ll have a clear picture of before-and-after performance metrics for the developers using Copilot. (A minimal cohort comparison is sketched after this list.)
  • Run (6+ weeks and ongoing): By now, GitHub Copilot adoption has increased, making it possible to observe the downstream impacts on collective outcomes beyond individual productivity gains. This phase focuses on measuring key performance indicators like Lead Time, Change Failure Rate (CFR), Number of Incidents, and Mean Time to Recovery (MTTR). (A KPI computation sketch follows the comparison example below.)
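
For the Learn phase, a simple cohort comparison is often enough to start the A/B conversation. The sketch below assumes you can export a per-pull-request table with a cycle-time metric and a flag for whether the author holds a Copilot license; the file name and column names are illustrative, not a prescribed schema.

```python
# Learn-phase A/B comparison: Copilot vs. non-Copilot cycle time.
import pandas as pd
from scipy import stats

# Hypothetical export: one row per merged PR, with the author's Copilot status.
prs = pd.read_csv("pull_requests.csv")  # columns: author, cycle_time_hours, copilot_user

has_copilot = prs["copilot_user"].astype(bool)
copilot = prs.loc[has_copilot, "cycle_time_hours"]
control = prs.loc[~has_copilot, "cycle_time_hours"]

# Cycle-time distributions are usually skewed, so use a rank-based test
# rather than assuming normality.
stat, p_value = stats.mannwhitneyu(copilot, control, alternative="two-sided")

print(f"Copilot cohort: n={len(copilot)}, median={copilot.median():.1f}h")
print(f"Control cohort: n={len(control)}, median={control.median():.1f}h")
print(f"Mann-Whitney U={stat:.0f}, p={p_value:.3f}")
```

Pair a comparison like this with the developer survey results so the numbers and the sentiment tell one story.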
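
For the Run phase, the downstream KPIs can be computed from deployment records once adoption is broad enough. This sketch assumes a deployments export with first-commit, deploy, and restore timestamps plus a failure flag; again, the layout and column names are illustrative assumptions.

```python
# Run-phase KPIs: Lead Time, Change Failure Rate, and MTTR from deployment records.
import pandas as pd

# Hypothetical export: one row per production deployment.
deploys = pd.read_csv(
    "deployments.csv",
    parse_dates=["first_commit_at", "deployed_at", "restored_at"],
)
deploys["caused_failure"] = deploys["caused_failure"].astype(bool)

# Lead Time for Changes: first commit to production deploy, per deployment.
lead_time = deploys["deployed_at"] - deploys["first_commit_at"]

# Change Failure Rate: share of deployments that degraded service.
cfr = deploys["caused_failure"].mean()

# MTTR: time to restore service, for the failed deployments only.
failed = deploys[deploys["caused_failure"]]
mttr = (failed["restored_at"] - failed["deployed_at"]).mean()

print(f"Median lead time: {lead_time.median()}")
print(f"Change failure rate: {cfr:.1%}")
print(f"MTTR: {mttr}")
```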

Read the next chapters for a deep dive into each phase’s best practices, benchmarks, and insights.
