OpenAI Disrupts Chinese and Global Actors Using ChatGPT for Surveillance and Influence Operations
Background
OpenAI has begun publishing threat reports that highlight how state‑affiliated actors and criminal networks are leveraging large language models for malicious purposes. The latest report, published on the company’s blog, summarizes a range of activities detected over the previous quarter.
Tools and Targets in China
The company disclosed that a now‑banned account originating in China used ChatGPT to help draft promotional materials and project plans for a social‑media listening tool described as a “probe.” This probe could crawl platforms such as X, Facebook, Instagram, Reddit, TikTok and YouTube to locate content defined by the operator as political, ethnic or religious. OpenAI noted that it cannot independently verify whether the tool was employed by a Chinese government entity.
In a separate case, OpenAI blocked an account that was using the chatbot to develop a proposal for a “High‑Risk Uyghur‑Related Inflow Warning Model.” The model was intended to aid in tracking the movements of individuals deemed “Uyghur‑related.” Both incidents illustrate how the technology can be repurposed for targeted surveillance.
Global Threat Landscape
Beyond China, OpenAI identified Russian, Korean and Chinese‑speaking developers who were using ChatGPT to refine malware. The company also uncovered entire networks operating in Cambodia, Myanmar and Nigeria that employed the chatbot to assist in creating scams. OpenAI’s internal estimates indicate that ChatGPT is being used to detect scams three times as often as it is used to create them.
During the summer, OpenAI disrupted operations in Iran, Russia and China that leveraged ChatGPT to generate posts, comments and other content designed to drive engagement and sow division as part of coordinated online influence campaigns. The AI‑generated material was distributed across multiple social‑media platforms both within the originating nations and internationally.
OpenAI’s Response
OpenAI’s threat reports, first published in February 2024, aim to raise awareness of how large language models can be weaponized to debug malicious code, develop phishing scams and carry out other illicit activities. The latest roundup summarizes notable threats and the accounts that were banned for violating OpenAI’s usage policies.
By actively monitoring and disabling accounts that exploit its technology for surveillance, malware refinement, or disinformation, OpenAI seeks to limit the misuse of its models while continuing to provide tools for legitimate users.
Implications
The disclosures underscore the dual‑use nature of advanced AI systems. While the technology offers powerful capabilities for research and productivity, it also presents opportunities for authoritarian surveillance and coordinated misinformation efforts. OpenAI’s proactive stance highlights the challenges tech companies face in balancing openness with responsibility.