The landscape of software development is rapidly shifting as more professionals turn to artificial intelligence for everyday tasks, yet skepticism persists about its dependability. According to Google’s latest DORA State of AI-assisted Software Development report, released in September 2025, 90 percent of tech professionals now use AI in their workflows—a notable increase over last year. Although this surge suggests a willingness to experiment, developers continue to view AI as a helpful resource rather than a trusted collaborator. Many teams recognize AI’s power to speed up their work, yet concerns remain that the technology may not always produce reliable results. Even as businesses seek efficiency, individual software engineers are weighing the benefits against the potential risks. With AI tools such as those from Google Cloud, and with practices outlined in the newly introduced DORA AI Capabilities Model, organizations are striving to bridge the gap between productivity and trust.
Earlier coverage of AI adoption in software has emphasized the rise in efficiency and enthusiasm among developers but did not report such a high adoption rate or so significant a jump in usage within a single year. Past analyses largely focused on AI’s power to assist with routine coding and bug fixes; they did not highlight the persistence of trust issues or detail developers’ cautious approach to AI outputs. New data reveals a more complex scenario, in which increased AI use has not translated into a proportional rise in confidence—a recently emerged dichotomy between adoption and trust.
What Drives Widespread AI Adoption in Software Work?
Major factors driving the surge in AI utilization among software professionals include the promise of higher productivity and improved performance. The DORA report reveals that 85 percent of respondents feel more productive because of AI, though 41 percent rate those gains as only slight, and only a small minority report that AI has reduced their productivity. Developers typically dedicate about two hours each workday to these tools, streamlining code writing, automated testing, and code review. Google Cloud’s focus on integrating AI reflects board-level priorities across organizations, which seek gains in both speed and quality of output.
How Do Developers View Trust and Code Quality?
Even with widespread use, developer trust in AI tools remains low, with just 24 percent expressing strong confidence in their reliability. Concerns about code quality and software stability persist, especially as AI-generated code often becomes part of core production systems. As Nathen Harvey, lead researcher at Google Cloud, commented,
“Even with the help of A.I., teams will still need ways to get fast feedback on the code changes that are being made.”
There is a clear understanding that while AI can boost throughput, human oversight remains critical to maintaining code integrity and readability.
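The “fast feedback” Harvey describes typically takes the form of automated checks that run on every change. As a hedged illustration only—the helper function and test below are hypothetical and not drawn from the DORA report—a small unit test can give a reviewer near-instant feedback on an AI-suggested code change before it reaches production:

```python
import re

# Hypothetical scenario: an AI assistant proposed this slugify helper.
# A fast, automated test lets a human reviewer verify the suggestion
# in seconds instead of discovering defects in production.

def slugify(title: str) -> str:
    """Lowercase a title and replace runs of non-alphanumeric characters with hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify():
    # Punctuation and spaces collapse into single hyphens.
    assert slugify("Hello, World!") == "hello-world"
    # Leading/trailing separators are trimmed.
    assert slugify("  DORA 2025 Report  ") == "dora-2025-report"

if __name__ == "__main__":
    test_slugify()
    print("all checks passed")
```

In practice, teams wire checks like this into continuous integration so that every AI-assisted change, like every human-written one, gets the same rapid verification.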
Can AI Tools Alleviate Workplace Pressure and Burnout?
Despite productivity gains, AI tools have not addressed fundamental workplace issues like developer burnout or organizational friction. These challenges often relate more to company culture and leadership expectations than to the technology itself. Some developers even worry that increased efficiency from AI may fuel greater demands. This sentiment is echoed by Harvey:
“If leaders start expecting more because A.I. makes developers faster, it could even add to the pressure.”
Addressing these systemic issues requires more than adopting new tools; it calls for changes in leadership, process, and culture.
Adoption of AI in software development has accelerated, but growing reliance does not automatically inspire trust. Developers use AI extensively for efficiency yet hesitate to accept its output unverified, treating it with skepticism comparable to that reserved for community-sourced resources. High-functioning teams and organizations are beginning to deploy frameworks such as the DORA AI Capabilities Model to manage the challenges arising from this trust gap, focusing on areas like effective communication, user-centric practices, and rapid feedback cycles. Those considering AI solutions should evaluate not just technical integration but also the readiness of their teams and organizational culture to address skepticism and foster confidence. Trust has emerged as the critical factor dictating how far AI can support—or limit—the progress of software engineering in the years ahead.