As artificial intelligence becomes embedded in digital systems worldwide, fresh findings from OpenAI reveal both cybercriminal and nation-state actors are increasingly relying on AI not to revolutionize their techniques, but to automate and optimize well-worn hacking and disinformation tactics. This sheds light on the evolving dual-use nature of language models like ChatGPT, where their utility serves both malicious activities and defensive countermeasures. Exchanges between adversaries and these platforms have opened up new administrative efficiencies, raising questions for future AI governance and risk assessment in cybersecurity. The company’s examination gives a snapshot not only into capabilities but also limitations, as attempts to misuse AI regularly encounter built-in safeguards. Concerns persist over AI video tools such as Sora 2, flagged by experts for their potential in manipulating multimedia content, though they remain outside the scope of this analysis.
In prior reporting on adversarial AI, many tech observers anticipated that threat actors might develop novel, AI-specific hacking methodologies. However, the current landscape remains largely focused on improving the scale and efficiency of established operations. Earlier research rarely outlined the distinct workflows and precise targeting behaviors now highlighted, particularly those emphasizing specific geopolitical interests or administrative uses within scam centers. Increased evidence of cross-platform learning, such as querying both ChatGPT and regional competitors like DeepSeek, suggests an adaptive approach and persistent experimentation by these groups.
How Are State and Criminal Actors Adapting Their Tactics?
According to OpenAI’s October threat report, both cybercriminal enterprises and state-aligned hackers are favoring AI to refine, rather than invent, attack methods. Familiar activities such as malware creation, social engineering, and intelligence collection remain central, with threat actors building AI into existing operational pipelines. The report notes,
“Repeatedly, and across different types of operations, the threat actors we banned were building AI into their existing workflows, rather than building new workflows around AI.”
These integrations have been detected across diverse geographies, with examples including China-aligned operations focused on Taiwanese semiconductor interests and U.S. academic targets, and North Korea-linked actors adopting a modular approach to tool development.
How Do Criminal Operations Use AI for Daily Operations?
Beyond targeted cyber-espionage, AI is being leveraged by online scam operations to automate fraud and handle business administration. OpenAI uncovered incidents in Myanmar and Cambodia where scam centers used ChatGPT for generating fraudulent content, organizing internal activities, and crafting convincing online personas. Some scam operators even revisited the tool for advice when engaging potential victims. This administrative application demonstrates the appeal of AI in streamlining both the technical and logistical facets of cybercrime. OpenAI researchers observed a “dual-use” pattern, highlighting that many users simultaneously seek the model’s guidance on avoiding scams and illicit activities.
How Effective Are Influence and Disinformation Campaigns?
Investigations also exposed clusters of social media accounts deploying OpenAI’s platform to disseminate pro-China narratives overseas, with tactics echoing those of known campaigns such as Spamouflage. Despite the sophistication in content generation, the impact of these efforts appeared minimal; most posts achieved little engagement outside of networks controlled by the same operators. OpenAI commented,
“Most of the posts and social media accounts received minimal or no engagements. Often the only replies to or reposts of a post generated by this network on X and Instagram were by other social media accounts controlled by the operators of this network.”
This suggests amplification strategies have yet to substantially improve with current AI tools.
Efforts to misuse general-purpose AI tools often encounter robust built-in protections. The report documented cases of Russian-speaking groups resorting to fragmented requests and developing modular code, after initial malware-related attempts were blocked. Many abuse attempts tend to exploit technical ambiguities or dual-use scenarios—such as generating benign-seeming snippets that could later be misapplied outside the platform. While the tool’s security measures prevent direct misuse within its ecosystem, the ultimate risks hinge on downstream user intent and off-platform activities.
AI’s intersection with cybersecurity continues evolving, with persistent actors exploring every edge for advantage. OpenAI’s monitoring points to a status quo where existing threats are made more efficient rather than fundamentally altered. For organizations, this raises important considerations for staff training, digital hygiene, and layered defenses, as threat actors now have access to tools that scale and obscure their operations more effectively. At the same time, language models are also being asked millions of times per month to identify and flag potential scams—often outpacing the malicious uses. This dual-use dynamic requires close oversight and constant adaptation by AI providers. Security practitioners should remain vigilant, continually educate themselves on evolving attacker techniques, and maximize the defensive value of these technologies while advocating for the strengthening of safety features. More broadly, regular cross-industry sharing of threat intelligence will prove crucial for identifying trends, closing loopholes, and establishing best practices.