Recent findings from Google's Threat Intelligence Group reveal that state-sponsored cyber actors have integrated Google's Gemini AI across nearly every step of the attack chain. The report illustrates how some of the world's most sophisticated threat groups have found value in Gemini for tasks ranging from reconnaissance to malware development. As cybersecurity defenses adapt, the balance of power between attackers and defenders continues to shift, raising new questions about AI's evolving role in both offensive and defensive digital operations. While some analysts argue that AI proliferation represents only an incremental change, those monitoring the global cyber landscape underscore its growing impact.
Earlier analyses highlighted the use of AI tools in cyber operations, but they focused on more rudimentary, often manual deployments. In contrast, the new report describes the increasing automation and sophistication that current AI models, such as Gemini and Anthropic's Claude, provide. These tools now offer greater autonomy, with capabilities expanding noticeably each year. While free, open-source models previously trailed leading frontier AI systems, that capability gap has narrowed, enabling more actors (including smaller or less-resourced organizations) to access advanced cyber capabilities within a short timeframe.
How Are State-Sponsored Groups Using Gemini?
State-sponsored groups from China, Russia, Iran, and North Korea have reportedly used Gemini for diverse activities, according to Google's research. The AI tool has been employed to automate data gathering, develop malware, and even generate fake news content or online personas as part of wider information operations. For instance, North Korean actors used Gemini to gather intelligence about roles and salaries in the defense sector, and Iranian groups leveraged the model for more effective reconnaissance. Google's reporting indicates that Gemini is mainly one utility among many in these campaigns, used chiefly to improve efficiency in routine or technical tasks.
Has Gemini Fully Automated Cyber Attacks?
Despite Gemini's growing utility, there are no confirmed cases of state actors using it, or similar AI, to run cyber attacks end-to-end with minimal human involvement. Google's John Hultquist points out that, for now, many threat groups remain in a trial-and-error phase:
“Nobody’s got everything completely worked out. They’re all trying to figure this out and that goes for attacks on AI, too.”
Many operations still require significant human input, and fully autonomous attacks may increase the likelihood of detection, reducing their appeal for espionage-focused actors.
Will Frontier AI Tilt the Balance Between Attackers and Defenders?
The spread of advanced AI models raises concerns about their dual-use nature. While companies like Anthropic and XBOW focus on developing robust AI-driven cybersecurity defenses, similar features could be exploited for offensive purposes by adversarial groups or state actors. The UK AI Security Institute notes that open-source models are quickly matching the capabilities of leading-edge AIs, which could further accelerate the pace and complexity of cyber activity worldwide. Hultquist acknowledges a nuanced risk landscape, noting:
“What’s so interesting about this capability is it’s going to have an effect across the entire intrusion cycle.”
AI's integration into cyber operations marks an adaptive phase rather than an overnight overhaul of hacking strategies. Both offensive and defensive uses of AI are set to become more sophisticated as capabilities develop. Security professionals and organizations should closely monitor how threat actors automate reconnaissance, malware creation, and information campaigns with tools like Gemini and Claude. Recent research suggests that as AI tools become easier to access and more autonomous, the threat surface will broaden, potentially leveling the field for smaller players and increasing the pace of cyber incidents. Staying informed about how these tools are actually applied, rather than feared in the abstract, will help organizations anticipate risks and focus resources more effectively.
