An incident involving a Chinese law enforcement official's use of ChatGPT has drawn attention to a wider effort targeting critics of China worldwide. The reports submitted for review exposed extensive campaigns designed to silence dissenters both in China and abroad. According to OpenAI, the account used ChatGPT not only for editorial purposes but also for planning influence campaigns and evaluating propaganda efforts. Observers outside China note that the development underscores the growing intersection between AI technologies and nation-state operations seeking to control information globally.
News reports from recent months have cited rising global concern over the use of advanced AI models in coordinated cyber campaigns. New findings show that actors experiment with AI to amplify existing operations rather than to fully automate attacks. Other international investigations have highlighted the blending of multiple AI tools, both domestic and foreign, revealing how rapidly AI-assisted information strategies are evolving. The trend points to a broader embrace of these technologies by state-backed and criminal organizations alike.
How Did ChatGPT Feature in the Disclosed Operations?
OpenAI said that a single account associated with a Chinese law enforcement office submitted internal reports on “cyber special operations” to ChatGPT for review. These documents described ongoing campaigns to suppress opposition, including content generation, social media manipulation, and attempts to discredit or threaten critics. One notable effort involved planning a propaganda campaign against Japanese Prime Minister Sanae Takaichi, a request the model refused; subsequent submissions, however, indicated that the activity continued. OpenAI commented,
“The activity concerned a single account that regularly used ChatGPT to review and edit reports on ‘cyber special operations.’”
What Scale and Methods Did the Campaigns Employ?
According to OpenAI’s report, the operations relied on hundreds of workers and thousands of fake accounts across social platforms, leveraging both foreign and domestic AI models such as DeepSeek. The campaigns extended to forging documents, filing misleading complaints, impersonating U.S. officials, and targeting analysts and state officials with personalized emails. The actors also attempted to gather information using VPNs and prompts written in Simplified Chinese, a script closely associated with mainland China, underscoring a deliberate, multi-pronged approach. OpenAI noted,
“Threat actors may use different AI models at various points in their operational workflow.”
Did AI Tools Enable Offensive Cyber Attacks?
OpenAI found no evidence that the model was used for automated hacking; instead, it served in a support role, helping to spread disinformation and to target individuals through scam attempts and coordinated harassment campaigns. Attempts to exploit ChatGPT included drafting emails, requesting installation instructions for software such as FaceFusion, and conducting reconnaissance on U.S. forums and offices. OpenAI corroborated these activities against public data sources; they primarily served to enable influence operations rather than direct attacks. The report also highlighted other campaigns, including those linked to Russia-aligned groups and romance scams, that demonstrated similarly limited yet impactful uses of AI tools.
The details provided by OpenAI underscore how AI, while not generating direct cyber intrusions, amplifies existing social engineering and propaganda efforts. By combining multiple AI platforms, actors can adapt quickly, refining their strategies as new technologies surface. For readers, this case illustrates the need for skepticism about the origins of online content and the potential for AI to drive sophisticated, cross-platform campaigns. Awareness of tactics such as mass posting, impersonation, phishing, and the exploitation of technical documentation is crucial for individuals and organizations navigating a digital environment in which both transparency and security face new challenges.
