
Mean Time to Recovery (MTTR): A Key Metric in DevOps

Everything you need to know about Mean Time to Recovery (MTTR), a key metric in DevOps.

Natalie Casey
9 min read
November 14, 2022

At Faros AI, we’re obsessed with DORA metrics. I mean, we created a full-blown guide on DORA metrics and covered all four of them: deployment frequency, lead time for changes, change failure rate, and mean time to recovery.

In this post, we will cover the fourth, but by no means least, metric: Mean Time to Recovery (MTTR). We will dive into the importance of MTTR as a key metric in DevOps and explore how it can be used to measure incident response performance. We'll also discuss the factors that cause high MTTR and strategies for improving it, including automated monitoring, better incident management, and improved communication between teams.

Without further ado, let’s get started.

What is Mean Time to Recovery (MTTR)?

Mean time to recovery (MTTR) refers to the average time it takes to recover fully from a failure. It includes the entire outage time, from detection through repair, testing, and restoration to full resolution. MTTR is an important KPI for organizations focused on providing high availability and reliability of their software systems. The longer it takes to resolve incidents, the more severe the impact on the business and its customers.

App and cloud monitoring company Dynatrace revealed that 79% of customers would retry a mobile app only once or twice if they experienced poor application performance (or downtime). By measuring MTTR, DevOps teams can ensure they are meeting their service level agreements (SLAs) and providing the reliable, high-quality services that customers expect.

Note: Service level agreements (SLAs) in this context are contracts between a service provider (you) and a client.

Mean Time to Recovery vs. Other MTTR Metrics

If you take a minute to search ‘MTTR’ on Google or Bing, you will see several different meanings, including ‘Mean Time to Repair’, ‘Mean Time to Resolve,’ and ‘Mean Time to Respond.’

They are all right!

MTTR usually stands for Mean Time to Recovery, but it can also refer to other incident metrics, including:

  • Mean Time to Repair
  • Mean Time to Resolve
  • Mean Time to Respond

Let's quickly look at the other MTTR metrics to see their differences.

Mean Time to Repair

Mean time to repair is the average time it takes to repair a system until it is fully operational again. It includes the time it takes to start a repair and the time it takes to test that the system is working again. This takes into account the time it takes to:

  • Alert the engineering team
  • Diagnose the issue
  • Fix the issue
  • Test the system to make sure it's fully operational

To calculate:

MTTR = Sum of all time to repair / number of incidents.

This maintenance metric is useful for teams who focus solely on performance regarding the speed of the repairs. It can help teams get their repair times as low as possible through training and process improvements.

Mean Time to Resolve

Mean time to resolve is the average time it takes to resolve an incident/failure. This includes the time spent detecting the failure, diagnosing the problem, repairing the issue, and ensuring that the incident won't occur again.

To calculate:

MTTR = Sum of all time to resolve / number of incidents

This MTTR metric helps show how fast a team works to resolve an issue and ensure it never happens again.

Mean Time to Respond

Mean time to respond is the average time it takes a team to respond to an incident once they get their first alert to the issue. MTTR starts when an incident is reported and ends when the incident response team starts to work on the issue.

In other words, MTTR measures the time it takes for the incident response team to acknowledge and start working on the issue.

To calculate:

MTTR = Sum of all time to respond / number of incidents

Teams should use the mean time to respond metric to assess the effectiveness of their alerting and escalation process.
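
To make the differences concrete, here is a minimal Python sketch that computes all three variants from the same set of incident records. The incident data and field names are hypothetical, and the mapping of timestamps to each metric is one reasonable reading of the definitions above; adjust it to match how your team defines each stage.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; each timestamp marks a stage in the incident lifecycle.
incidents = [
    {
        "reported":     datetime(2022, 11, 1, 9, 0),    # first alert received
        "work_started": datetime(2022, 11, 1, 9, 10),   # team acknowledges and begins work
        "repaired":     datetime(2022, 11, 1, 9, 50),   # system verified operational again
        "resolved":     datetime(2022, 11, 1, 11, 0),   # root cause addressed, follow-ups done
    },
    {
        "reported":     datetime(2022, 11, 3, 14, 0),
        "work_started": datetime(2022, 11, 3, 14, 5),
        "repaired":     datetime(2022, 11, 3, 14, 35),
        "resolved":     datetime(2022, 11, 3, 15, 0),
    },
]

def mean_minutes(start_key: str, end_key: str) -> float:
    """Average duration in minutes between two lifecycle stages across all incidents."""
    return mean((i[end_key] - i[start_key]).total_seconds() / 60 for i in incidents)

print("Mean time to respond:", mean_minutes("reported", "work_started"), "minutes")  # 7.5
print("Mean time to repair: ", mean_minutes("work_started", "repaired"), "minutes")  # 35.0
print("Mean time to resolve:", mean_minutes("reported", "resolved"), "minutes")      # 90.0
```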

Why and how to measure mean time to recovery

As an engineering leader, you know how time-consuming and stressful resolving incidents are. Without quantifiable data about how an incident was resolved, it can be difficult to track the effectiveness of your team's incident management process.

A metric like MTTR gives you clear insight into your team's incident management process, showing whether the time to resolve incidents is increasing or decreasing. Here are some reasons why you should take the MTTR metric seriously:

Helps track reliability

MTTR not only shows you how effective your incident management process is, but it also shows you how reliable your application is. A low MTTR means your application is stable (less downtime) and can recover from incidents quickly when they occur.

Identifying bottlenecks

By measuring MTTR, engineering leaders can identify bottlenecks in their development process. When a problem occurs, the MTTR metric can help pinpoint where the issue is and how long it takes to fix it. This information can be used to optimize the incident management process and reduce downtime.

Tracking incident management progress

Once you've pinpointed the improvements that need to be made and started optimizing your process, MTTR is a great metric for checking whether you're on the right track. If your MTTR drops as a result of the changes you made, you're on the right track. If it doesn't, that doesn't mean the changes weren't necessary; it only indicates that the bottleneck to resolving issues faster is somewhere else in your process, and you need to find it.

Now that we have established the importance of measuring MTTR, let's discuss how to measure it:

  • Establish the incident: Teams need to define what constitutes an outage or incident. This could include app downtime, a customer complaint, a system alert, or any other trigger that indicates an issue has occurred.
  • Record the time: The time taken to resolve the incident should be recorded accurately. This includes the time taken to detect, diagnose, and resolve the issue. Many teams use tools to create tickets when a failure is reported. Tickets are generally created manually but can also be automated with monitoring systems. The most important thing is to record the time from when the incident started until it's resolved, for full transparency.
  • Calculate MTTR: Once the data is collected, MTTR can be calculated by taking the total time to resolve incidents and dividing it by the number of incidents. For instance, if your app was down for 1 hour (60 minutes) in a week across 2 separate incidents, you would divide 60 by 2; your MTTR would be 30 minutes (see the sketch after this list).
  • Analyze the data: Analyzing the data will provide insights into incident response performance, including areas that need improvement.
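
To illustrate the calculation, here is a minimal Python sketch (with hypothetical incident timestamps) that reproduces the worked example above: two incidents totaling 60 minutes of downtime yield an MTTR of 30 minutes.

```python
from datetime import datetime

# Hypothetical week of incidents: two outages totaling 60 minutes of downtime.
incidents = [
    (datetime(2022, 11, 7, 10, 0), datetime(2022, 11, 7, 10, 45)),  # 45-minute outage
    (datetime(2022, 11, 9, 16, 0), datetime(2022, 11, 9, 16, 15)),  # 15-minute outage
]

# MTTR = total time to recover / number of incidents
total_minutes = sum((end - start).total_seconds() / 60 for start, end in incidents)
mttr_minutes = total_minutes / len(incidents)

print(f"MTTR: {mttr_minutes:.0f} minutes")  # 60 minutes / 2 incidents = 30 minutes
```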

What is a good MTTR?

According to the 2022 State of DevOps Report, high-performing teams typically recover from incidents or failures in less than a day. It takes between a day and a week for average (medium-performing) teams to recover from an incident, while low-performing teams spend one week to a month recovering from incidents.

Source: 2022 State of DevOps Report

The lower the MTTR, the better the software delivery performance because the organization can quickly identify and resolve issues that impact the system or product.

Remember, high-performing teams can recover within a few hours, and every second in the recovery period counts. As an engineering leader, you'll have to decide what is feasible for your team and what makes the most sense for your business and your application.

It's best to start by establishing your team's current MTTR. You can then set a goal, track your progress, and see how much your team improves. If the team meets the goal, you can set a new one. If the goal was too ambitious, scale it back. The specific goal is not as important as driving toward improvement.

What causes high MTTR?

Here are some factors that can cause a high MTTR in a DevOps environment:

Lack of planning

“He who fails to plan is planning to fail” - Winston Churchill.

What happens when a fault has been detected and acknowledged? Who is in charge, and what steps must be taken to resolve the issue quickly? These are questions you should ask yourself (and your team) as an engineering leader.

Don't wait until an incident happens to start planning. Imagine your DevOps team quickly detects an incident, but they don't know where to start. Sarah and Rick are engineers who know how to perform deployments (manually), but neither knows who is in charge. Should Sarah do it? Should Rick do it? When you don't plan ahead for incidents, there will be confusion, which is bad for your team and your customers.

Departmental Silos

Silos in the engineering department can contribute to high MTTR by creating barriers to communication and collaboration between teams. When different teams work in isolation and do not communicate effectively, it can lead to longer resolution times for problems.

For example, if a system failure occurs, different teams may be responsible for different components of the system. If those teams don't have good communication and collaboration processes in place, it can lead to delays in identifying the root cause of the issue and implementing a fix.

Manual deployment process

In our article about deployment frequency, we mentioned that one of the reasons for low deployment frequency is a lack of automation (manual processes). A manual deployment process requires human intervention to manage and deploy changes, which can be time-consuming and prone to errors. Manual deployments not only hurt deployment frequency (because it takes engineers time to deploy changes), but they also negatively impact MTTR for the same reason.

How to reduce MTTR

Once you've identified that your MTTR is higher than you would like it to be, you need to take steps to improve it. Here are some steps you can take to reduce your MTTR:

  • Implement continuous integration/continuous delivery (CI/CD) systems to automate monitoring and failure detection. Automated monitoring can help identify issues before they become critical and help teams respond more quickly (see the sketch after this list).
  • Improve communication among team members during the incident response process to reduce delays and ensure that everyone is informed of the status of the recovery efforts.
  • Be prepared for any incident. Develop standard operating procedures and playbooks that define the steps to follow in the event of an incident. These materials should be given to all developers working on the project so they are prepared to respond to incidents quickly.
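
As a rough sketch of the first point, automated failure detection can be as simple as a periodic health check that starts the MTTR clock and alerts the on-call engineer the moment a service stops responding. The endpoint, interval, and alerting hook below are hypothetical placeholders; in practice you would wire this into your monitoring and paging tools.

```python
import time
import urllib.request
from datetime import datetime

HEALTH_URL = "https://example.com/health"  # hypothetical health-check endpoint
CHECK_INTERVAL_SECONDS = 60

def is_healthy(url: str) -> bool:
    """Return True if the endpoint answers with HTTP 200 within 5 seconds."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except Exception:
        return False

def alert_on_call(message: str) -> None:
    """Placeholder for your paging integration (PagerDuty, Slack, email, etc.)."""
    print(f"[ALERT] {message}")

incident_started_at = None
while True:
    if not is_healthy(HEALTH_URL):
        if incident_started_at is None:
            incident_started_at = datetime.utcnow()  # the MTTR clock starts here
            alert_on_call(f"Health check failed at {incident_started_at.isoformat()}")
    elif incident_started_at is not None:
        downtime_minutes = (datetime.utcnow() - incident_started_at).total_seconds() / 60
        alert_on_call(f"Recovered after {downtime_minutes:.1f} minutes")
        incident_started_at = None
    time.sleep(CHECK_INTERVAL_SECONDS)
```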

Overall, reducing MTTR requires implementing automation, standardizing procedures, improving communication, and ensuring that team members are prepared to respond to incidents quickly and effectively.

Final Thoughts on Mean Time to Recovery

Mean Time to Recovery (MTTR) is a key metric that helps teams improve their processes and reduce downtime. However, it's important to remember that while reducing MTTR is important, it should not come at the expense of quality or stability; MTTR works best alongside other DORA metrics.

Faros AI makes it easy to implement monitoring systems and start tracking and improving DORA metrics. Check us out for free with Faros Essentials, where you can access Git + Jira metrics in 10 minutes.


