OpenAI reported that it has banned a China‑originated account that used ChatGPT to design a social‑media listening "probe" capable of crawling major platforms for politically, ethnically, or religiously defined content. The company also blocked an account developing a "High‑Risk Uyghur‑Related Inflow Warning Model" intended to track individuals. These actions are part of a broader enforcement effort that uncovered Russian, Korean, and Chinese developers refining malware, as well as networks in Cambodia, Myanmar, and Nigeria building scams with the help of AI. OpenAI estimates that its models are used to identify scams three times more often than to create them, and it has also disrupted influence campaigns linked to Iran, Russia, and China.