Machine learning models signal when it’s time to pay down technical debt.
Code complexity is nearly unavoidable in the modern software development landscape. As businesses innovate to satisfy rising demands, the introduction of new features gradually increases code complexity over time. If this complexity is not addressed, it escalates and compounds, increasing bugs and technical debt while decreasing developer productivity.
While tools now exist to prevent complexity from creeping in at the individual code-change level, many companies still struggle to address large, existing complexity problems because refactoring established systems is time-consuming, expensive, and inherently risky.
So how do you know when code complexity has become a major contributor to lost developer productivity? When is it time to address the issue head-on and prioritize simplification? Machine learning models may provide the answer.
Recent R&D from Faros AI into developer productivity analytics, automated issue detection, and the ranking of potential causes can highlight when code complexity is becoming a blocker.
Code complexity refers to the intricacy and sophistication of a software program, defined by how easy or difficult the code is to understand, modify, and maintain. There are two main types of code complexity: cyclomatic complexity, which counts the number of independent execution paths through the code, and cognitive complexity, which estimates how hard the code is for a human to read and follow.
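To make cyclomatic complexity concrete, here is a minimal sketch of how it can be approximated with Python's standard `ast` module: start at 1 and add 1 for each branch point. This is a simplified illustration; commercial analyzers apply more refined counting rules.

```python
import ast

# Simplified set of decision-point node types. Real tools also weight
# nesting, boolean operators, and other constructs differently.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime-ish"
"""
# Three if-branches (the elif is an If node) plus one loop: 1 + 4 = 5
print(cyclomatic_complexity(sample))  # 5
```

Even this toy metric shows the core idea: every added branch is another path a developer must hold in their head, and another path that needs testing.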
As both cyclomatic and cognitive complexity increase, so does the impact on developer productivity. Complex codebases are more prone to bugs and unexpected behavior, often forcing developers to divert time and energy from important feature work to debug and troubleshoot issues.
Furthermore, when codebases are overly complex, developers must spend more time and effort trying to understand the existing system, identify dependencies, and determine the safest way to make even small changes.
The cognitive burden of working with highly complex code can lead to developer fatigue and frustration, hampering their motivation and focus, while frequent context switching between different parts of a sprawling codebase slows down their ability to implement new features or enhancements efficiently.
Code complexity increases as software evolves. As a codebase grows, the increase in code volume naturally leads to greater complexity. More dependencies and more execution paths mean more debugging and heavier maintenance. Even the most well-written, well-organized code becomes harder to manage over time, which is why this issue is nearly unavoidable.
Aside from volume, a host of other practices and processes across the software development lifecycle can also contribute to code complexity.
When left unchecked, all of these elements can lead to long-term, systemic coding complexity issues that are difficult to resolve.
To proactively manage code complexity and avoid its compounding effects, there are several best practices companies can follow.
Cohesion and coupling are key concepts in software design that significantly impact code complexity.
The ideal scenario is to achieve high cohesion within modules while maintaining low coupling between them. This balance ensures that each module is focused and self-contained, and changes in one module have minimal impact on others. Managing these aspects effectively leads to more maintainable, less complex code.
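The principle can be sketched in a few lines of Python. In this hypothetical example (all names are illustrative, not from any real codebase), a report module depends only on a narrow interface rather than on a concrete storage class, so the cohesive storage module can change without rippling into report code:

```python
from typing import Protocol

# Low coupling: ReportGenerator depends on this narrow interface,
# not on any concrete storage implementation.
class Storage(Protocol):
    def load(self, key: str) -> str: ...

# High cohesion: this class does one focused job, saving and loading values.
class InMemoryStorage:
    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]

# The report code never references InMemoryStorage directly; any object
# with a matching load() works, so storage can evolve independently.
class ReportGenerator:
    def __init__(self, storage: Storage) -> None:
        self._storage = storage

    def generate(self, key: str) -> str:
        return f"Report: {self._storage.load(key)}"

store = InMemoryStorage()
store.save("q3", "revenue up 12%")
print(ReportGenerator(store).generate("q3"))  # Report: revenue up 12%
```

Swapping `InMemoryStorage` for a database-backed class would require no change to `ReportGenerator`, which is exactly the low-coupling property that keeps change impact local.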
Static code analysis involves examining the source code of a program to identify potential vulnerabilities, errors, or deviations from prescribed coding standards. Types of static code analysis tools include bug finders, security scanners, type checkers, complexity analyzers, dependency checkers, and duplicate code detectors—all designed to address specific dimensions of code quality, security vulnerabilities, and maintainability challenges.
Tools such as Codacy and Sonar offer immediate feedback during the development process and can be integrated and automated in two main ways:
Whenever a PR is submitted or code is merged, these tools perform checks to ensure the new code is free of vulnerabilities and meets quality standards, helping to minimize code complexity by identifying issues early and keeping the codebase consistent.
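The PR-gate pattern itself is simple, whatever tool backs it. The hypothetical sketch below shows the shape of such a gate: run a set of named checks and allow the merge only if all pass. In a real pipeline, each check would call out to a tool such as Codacy or Sonar rather than use hard-coded results.

```python
# Hypothetical pre-merge quality gate. Each entry is (check name, passed).
# In practice these results would come from static analysis tools run in CI.
def run_gate(checks: list[tuple[str, bool]]) -> bool:
    failures = [name for name, passed in checks if not passed]
    for name in failures:
        print(f"FAILED: {name}")
    # Merge is allowed only when every check passes; CI would map this
    # boolean to the process exit code to block or allow the merge.
    return len(failures) == 0

results = [
    ("no new security findings", True),
    ("complexity under threshold", False),  # simulated failing check
    ("no duplicated blocks", True),
]
print("merge allowed" if run_gate(results) else "merge blocked")  # merge blocked
```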
Sometimes, such as when using a mono-repo model, two separate code updates are reviewed at the same time. Each passes static code analysis on its own and is merged into the main branch, appearing completely fine in isolation. But once the two changes are combined, new integration issues can surface and break the mainline. While routine checks are run on the mainline, they are not typically part of the pull request process, so the impact isn't immediately evident; it is felt when breakages occur further down the development process and add to code complexity.
To manage and prevent this, you can set up an additional step to automatically test the main branch whenever changes are made and block the release until any issues are fixed. This strategy helps control code complexity by catching integration issues early and reducing the risk of compounding problems, thus ensuring a cleaner, more reliable codebase.
By the time you come across this article, you're probably aware of the high code complexity in your systems, but you've postponed addressing it to focus on customer-facing priorities. That's understandable, but it is important to recognize the point at which high code complexity degrades developer productivity enough to have a significant impact on the business and its customers.
With multiple factors at play, how do you determine whether code complexity is the one significantly dragging down productivity?
Devoting multiple cycles, months, or—let’s be honest—years to rearchitecting and refactoring code is not a decision made lightly. But it is necessary if it’s the number one factor impacting key performance metrics.
In the past, companies looking to understand the impact of their high code complexity turned to human data analysts to parse through complex code and make recommendations. Imagine some poor soul tasked with manually combing through mountains of code, making dozens of dashboards to look at metrics for every team, and comparing these metrics to factors like Jira tickets, team seniority, number of services owned, deployments per week—and every other factor of influence—and then trying to decide which of these hundreds of factors is actually causing their slow lead time. Not only is this impractical, but it's also a huge drain on time and money to try and understand the code complexity’s impact and potential causes in this manner.
But now, machine learning solutions, like those developed by Faros AI, offer a better way.
Faros AI uses machine learning to ingest and analyze data from numerous key performance indicators, such as change failure rate, lead time for change, pull requests, cycle time, successful deployments, and incident resolution times, alongside cyclomatic complexity scores from tools like Codacy and Sonar.
This data is then examined across teams to identify significant differences and uncover potential causes for the discrepancies. Faros AI identifies correlations across conditions to pinpoint whether high code complexity is the main contributor. For example, if PR cycle times are rising rapidly and high code complexity is identified as a key factor, leaders have concrete evidence that it may be time to address the issue.
Furthermore, Faros AI’s platform can juxtapose these code complexity insights with developer survey data. If developers report coding complexity issues in surveys and this feedback aligns with the quantitative data, this combined picture gives leaders a compelling reason to consider tackling this compounding challenge and address it more effectively.
As many engineering organizations adopt AI coding assistants, it's critical to understand their impact on code complexity. GeekWire published an article exploring findings from GitClear's research into AI copilots and their effect on code quality. The findings indicate that while AI coding assistants make adding code simpler and faster, they can also erode quality through increased code churn and a rise in copy-pasted, duplicated code at the expense of refactoring.
These practices are generally seen as a negative indicator of code complexity. If your engineering organization is using AI copilots, Faros AI can illuminate this “AI-induced tech debt” and demonstrate its impact on downstream metrics. Armed with this insight, engineering leaders can take steps to mitigate these issues and promote better processes to support the ongoing health and manageability of their codebases.
Whether or not you decide to embark on a refactoring and simplification initiative, it’s imperative you’re aware of how code complexity is affecting your development teams. If you know it’s time to take action but you’re unsure where to start, or if you’re just curious to see how much longer you can sweep increasing code complexity under the rug (jokes), Faros AI’s engineering intelligence solutions can provide you with the answers for informed decision-making.
Request a demo to learn more.
Global enterprises trust Faros AI to accelerate their engineering operations. Give us 30 minutes of your time and see it for yourself.