The AI Productivity Paradox in Software
AI is everywhere in software development, yet measurable productivity gains remain elusive. What's causing this productivity paradox?
Artificial intelligence is being absorbed into software development at extraordinary speed. Yet there is little clear evidence, so far, of a corresponding step change in productivity or output.
In software development, the verdict on artificial intelligence appears to have been delivered in advance. Copilots are embedded across coding environments, model capabilities continue to improve, and the expectation is widely shared: productivity will rise.
The logic feels straightforward. If generating code becomes faster, easier and more accessible, then more software should be produced. Teams should move more quickly. Costs should fall. The long-standing constraints of development, such as time, labour, and complexity, should begin to loosen.
Yet revolutions in complex systems rarely arrive in such a linear way. They tend instead to appear first in behaviour, then in process, and only later in measurable outcomes. The question, then, is not whether AI is being adopted. The more interesting question is whether that adoption is yet visible in the outputs that matter.
Adoption is No Longer in Doubt
By almost any measure, AI tools have crossed into the mainstream of software development. According to the 2025 Stack Overflow Developer Survey, 84% of respondents are already using or planning to use AI tools in their workflow, up from 76% the previous year. Among professional developers, daily usage has reached 51%, a striking signal of how quickly these tools have moved from novelty to routine.
AI tools in the development process
This is not tentative experimentation. It is widespread integration. AI is now present not just at the margins of development, but at its centre, shaping how code is written, reviewed and refined.
At the same time, adoption is uneven across categories. While general AI-assisted coding tools are widely used, more advanced concepts such as autonomous agents remain far from mainstream, with a majority of developers either not using them or expressing no intention to do so. This suggests that “AI in development” is not a single phenomenon, but a layered one, with different levels of maturity and trust.
Even so, the direction of travel is clear. AI tools are being absorbed into everyday practice at remarkable speed, and the expectation that they will transform productivity has become embedded alongside them.
Repositories and a Missing Output Trend
If AI were already delivering a step-change in productivity, we might expect to see it reflected in the rate at which software is being created. One of the few available proxies for this is activity on GitHub, where the number of repositories provides a rough measure of global software production.
Data from GitHub’s Innovation Graph shows a steady increase in the total number of repositories over time, across major regions including the United States, Europe and India. Software production is clearly growing.
What is less clear is any sign of acceleration. The period following the rapid rise of generative AI in 2022 does not show a distinct inflection point in repository growth. Instead, the trend continues along a broadly consistent trajectory, suggesting that while more software is being created, it is not yet being created at a dramatically faster rate.
This is, at the very least, unexpected. The widespread adoption of tools explicitly designed to increase productivity has not, so far, translated into a visible shift in one of the most basic indicators of output.
That does not mean no change is occurring. Repository counts are an imperfect proxy, and many forms of software work, particularly within private organisations, are not captured here. But taken at face value, the data points to continuity rather than disruption.
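The reasoning above can be made concrete with a small sketch. The repository counts below are purely illustrative assumptions, not real Innovation Graph figures; the point is the method: a genuine post-2022 step change would show up as a jump in the year-over-year growth rate, whereas a flat series of rates indicates continuity.

```python
# Illustrative check for an inflection point in repository growth.
# These counts are hypothetical placeholders, NOT real GitHub
# Innovation Graph data; only the method is meant seriously.
counts = {2019: 100, 2020: 115, 2021: 132, 2022: 152, 2023: 174, 2024: 200}

years = sorted(counts)
# Year-over-year growth rate for each year after the first.
growth = {y: counts[y] / counts[y - 1] - 1 for y in years[1:]}

# A step change after 2022 would appear as a jump in these rates;
# a roughly constant series suggests continuity, not disruption.
for year, rate in growth.items():
    print(f"{year}: {rate:+.1%}")
```

On these assumed numbers, every year's growth sits near 15%, which is what "no visible inflection" looks like in practice: output keeps rising, but the rate of increase does not.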
Uncertain Signals: Microsoft and Adobe
If the effects of AI are not yet visible in aggregate output, they might instead appear in the performance of the companies building and selling these tools. Public filings offer a useful, if imperfect, window into this layer. Unlike marketing material, they are constrained by disclosure requirements and tend to reflect what firms can substantiate.
If AI is delivering meaningful productivity improvements, we might expect to see early signs appearing in familiar places, such as improving margins, slowing cost growth, or shifts in hiring patterns.
Certainly, across major software companies, the language around AI is unambiguous. In its latest quarterly filing, Microsoft repeatedly frames artificial intelligence as a driver of productivity and efficiency, highlighting its integration across products and workflows. Adobe’s annual report adopts a similar tone, positioning AI as a core component of its future growth and a means of enhancing user productivity.
Yet the financial signals themselves are more measured. Growth remains strong, particularly in cloud and subscription services, but it largely follows existing trajectories. Revenue continues to rise at double digit rates, consistent with the expansion patterns established before the current wave of AI adoption. At the same time, companies emphasise the scale of ongoing investment required to build and operate AI systems, particularly in infrastructure and compute.
In other words, AI is clearly being integrated and monetised, but its impact appears incremental rather than transformative, at least in the near term. There is little in these disclosures to suggest a sudden shift in productivity at the level of the firm.
There is also scant evidence, in public disclosures, of a clear shift in labour dynamics. While AI is frequently associated with efficiency and productivity in strategic language, companies continue to emphasise hiring, investment in talent, and the importance of skilled employees. If AI is reducing the need for labour, that effect is not yet clearly visible in these signals.
The AI Productivity Gap: Adoption but No Uptick
Taken together, this suggests a more nuanced picture. AI is certainly present, and may be delivering local efficiencies within specific systems. Developers are incorporating these tools into their daily work, and companies are embedding them across products and platforms.
But at the level of outcomes, the shift is far less clear. Software production continues to grow, yet without a visible increase in pace. Company performance remains strong, yet broadly in line with pre-existing trends. The expected link between adoption and productivity has not yet asserted itself in a measurable way.
This does not invalidate the case for AI. It does, however, introduce a degree of uncertainty into how its impact should be understood. If a general purpose technology is already widely used, but its effects are not yet visible in output or performance, then the relationship between use and value may be more complicated than first assumed.
Limits of Measurement
At this point, a reasonable objection begins to surface. If AI is improving how developers work, perhaps the effects are simply not visible in the measures being used. Repository counts, after all, capture volume, not necessarily value.
Indeed, software productivity has always been difficult to define. Unlike manufacturing, where output can be counted in units, software development produces systems whose impact is often indirect and long term. A single well-designed platform can replace multiple smaller applications. A more efficient architecture can reduce maintenance costs for years. Improvements in reliability, performance or usability may matter more than an increase in the number of projects created.
From this perspective, AI may already be delivering meaningful gains. Developers may be able to explore more ideas, iterate more quickly within existing systems, or tackle problems that were previously out of reach. These changes would not necessarily register as a sharp increase in the number of repositories or lines of code.
There is also the question of where effort is being applied. If AI reduces the time required for initial implementation, but increases the need for verification, integration and oversight, then productivity gains may be partially offset by new forms of work. The result is a shift in the composition of labour rather than a simple increase in output.
Even so, measurement problems can only explain so much. If productivity gains were both large and widely distributed, it is reasonable to expect that at least some signal would begin to appear across multiple indicators, whether in output, employment, or company performance. The absence of a clear inflection point does not disprove the presence of improvement, but it does suggest that any gains are either modest, uneven, or still in the process of emerging.
The Original Productivity Paradox
This apparent gap between rapid technological adoption and slower measurable impact is not without precedent. In the late twentieth century, the spread of information technology produced a similar pattern. Computers were becoming ubiquitous across offices and industries, yet productivity statistics showed little immediate improvement.
The economist Robert Solow captured the puzzle in a widely cited remark: the computer age could be seen everywhere except in the productivity data. At the time, this was taken by some as evidence that the benefits of computing had been overstated. In retrospect, the conclusion was premature.
What followed was not a sudden surge, but a gradual reconfiguration of how work was organised. Businesses adapted their processes, restructured teams, and developed new ways of using digital tools. Over time, these changes accumulated, and productivity gains began to appear more clearly, particularly in sectors that were able to reorganise most effectively around the new technology.
A New Type of Puzzle
There are reasons to think AI may follow a similar trajectory. As with earlier general purpose technologies, its impact depends not only on the capabilities of the tools themselves, but on how they are integrated into broader systems. Changes to workflows, management practices and organisational structures tend to lag behind changes in tooling.
At the same time, the comparison has limits. Software development is already a highly digital, highly optimised domain. The marginal gains from additional tooling may be smaller than those achieved during earlier phases of technological adoption. It is also possible that AI introduces new forms of complexity even as it reduces others, slowing the translation of capability into measurable output.
The historical analogy does not resolve the question, but it reframes it. The absence of an immediate productivity surge is not unusual. What matters is whether the conditions for such a surge are being created, and how long it may take for them to become visible.
Increasing Productivity or Reshaping Labour?
One possible explanation lies in how these tools are actually used in practice. The Stack Overflow 2025 survey data reveals a persistent sense of friction among developers using AI tools. A majority report encountering outputs that are close to correct but require further adjustment, while many note that debugging AI generated code can take additional time.
Challenges with AI tools in development
Perhaps this explains why, in the same survey, positive sentiment towards AI tools fell from more than 70% in 2023 and 2024 to just 60% in 2025.
Moreover, the 2025 data suggests more developers actively distrust the accuracy of AI tools (46%) than trust it (33%), and only a fraction (3%) report "highly trusting" the output.
Trust in AI tools
These are not trivial issues. In complex systems, small inaccuracies can propagate quickly, and the cost of verification can outweigh the initial gains from generation. In such cases, AI does not remove work so much as reshape it, shifting effort from creation to review, correction and validation.
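This trade-off can be sketched as simple arithmetic. All the hours below are hypothetical assumptions chosen to illustrate the shape of the problem, not measurements: AI shortens the drafting stage, but "almost correct" output adds a review and correction stage that eats into the saving.

```python
# Toy model with hypothetical numbers: AI speeds up drafting a task
# but shifts effort into verifying and fixing near-correct output.
baseline_hours = 4.0      # assumed time to write and self-review by hand
ai_draft_hours = 1.0      # assumed time for an AI-assisted first draft
ai_review_hours = 2.5     # assumed time to verify and correct that draft

ai_total = ai_draft_hours + ai_review_hours
saving = 1 - ai_total / baseline_hours

print(f"net time saved: {saving:.1%}")
```

Under these assumptions the net saving is modest, around one hour in eight, despite the drafting stage being four times faster: generation gains are largely absorbed by the new review work.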
There are other possibilities. The effects of new technologies often appear with a lag, particularly when they require changes to workflows, team structures or organisational processes. It is also possible that current measures of output do not fully capture the kinds of gains AI is delivering, especially if those gains are concentrated in speed, flexibility or accessibility rather than volume.
For now, however, the most striking feature of the data is not transformation, but divergence. Adoption is moving quickly. Measured outcomes are moving more slowly.
New Coordination Costs
A second explanation lies not in the tools themselves, but in the systems in which they are used. Software development is rarely an individual activity. It is a coordinated process involving teams, codebases, dependencies and review structures.
In such environments, local improvements do not automatically translate into system level gains. If individual developers can produce code more quickly, but that code still needs to be reviewed, integrated and maintained, then the overall pace of development may remain constrained by those downstream processes.
AI may even increase coordination costs in some cases. More code generated more quickly can create additional review overhead. Systems may become more complex as new components are introduced. Teams may need to spend more time validating outputs that are plausible but not fully reliable.
This dynamic is familiar from other domains. Gains in one part of a system often expose bottlenecks elsewhere. In software, those bottlenecks may lie in integration, testing and governance rather than generation.
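This bottleneck logic resembles Amdahl's law: if code generation is only a fraction of the delivery pipeline, speeding up that fraction caps the overall gain. The fractions below are assumptions for illustration, not measured values.

```python
# Amdahl-style sketch with hypothetical fractions: speeding up one
# stage of a pipeline yields limited end-to-end improvement.
gen_fraction = 0.3    # assumed share of cycle time spent writing code
gen_speedup = 3.0     # assumed AI speedup on that share alone

# Remaining work (review, integration, testing) is unchanged.
overall = 1 / ((1 - gen_fraction) + gen_fraction / gen_speedup)

print(f"overall speedup: {overall:.2f}x")
```

Even a threefold speedup on the generation stage yields only a 1.25x gain overall on these assumptions, because review, integration and testing still take their original time, and may in practice take more.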
Shifting Roles: Creation to Supervision
A related shift can be seen in the changing role of the developer. If AI tools are increasingly responsible for generating first drafts of code, the human task moves toward supervision, correction and judgement.
This is a different kind of work. It places greater emphasis on understanding system level behaviour, identifying subtle errors, and making decisions about trade-offs and priorities.
It may also be more cognitively demanding. Evaluating an AI generated solution requires not only domain knowledge, but also an awareness of where the system is likely to fail. This helps to explain why more experienced developers tend to express greater caution in their use of these tools, as reflected in survey data. Their expertise allows them to see both the strengths and the limitations more clearly. In this model, AI does not reduce the need for skill. It changes where that skill is applied.
Implications for the Software Economy
If these patterns hold, even tentatively, they suggest a different trajectory for AI in software than the one often assumed.
For companies, the immediate effect may be less about cost reduction and more about capability expansion. AI enables new features, new products and new ways of interacting with software, but does not necessarily reduce the resources required to build and maintain those systems in the short term.
For developers, the shift may be toward roles that combine technical expertise with oversight and evaluation. The ability to work effectively with AI systems, to guide them and to assess their outputs, may become as important as the ability to write code directly.
For the industry as a whole, the expectation of rapid, measurable productivity gains may need to be tempered. The impact of AI may be significant, but it is likely to be uneven, gradual and shaped by the structure of the systems into which it is introduced.
An Uncertain Transformation
Artificial intelligence is already changing software development. That much is clear from the speed of its adoption and the scale of investment behind it. What remains unclear is how quickly those changes will translate into measurable outcomes.
The current evidence suggests a more gradual process. Tools are improving and spreading, but their effects are mediated by existing workflows, organisational structures and the inherent complexity of software systems. Productivity gains, where they exist, may take time to accumulate and may not appear in the forms that are easiest to measure. For now, the most striking feature of the data is the gap between expectation and observation. AI is everywhere in software development. Its impact, however, is still emerging.



