AI-Generated Disinformation Overwhelms X During Iran Conflict
AI Tools Fuel a Surge of False Content
Disinformation specialists observed that X's AI‑powered chatbot Grok failed to verify a post about Iranian missiles, repeatedly misidentifying the location and date of the original video. Grok then compounded the error by posting an AI‑generated image, illustrating how the system itself can produce misleading visuals. Across the platform, AI‑generated images and videos have surged, including fabricated footage of a burning high‑rise in Bahrain, a US B‑2 bomber allegedly shot down, and Delta Force members purportedly taken captive. Some of these posts amassed millions of views before they were removed.
Iranian officials and state media have also used AI tools to produce videos depicting missile manufacturing inside caves and other exaggerated scenes of damage. In addition, pro‑regime networks on X have circulated antisemitic AI‑generated posts depicting Orthodox Jews leading American soldiers to war and celebrating American casualties. One viral video showing a line of young girls walking past former President Donald Trump in underwear reached millions of views before it was removed.
X's Partial Countermeasures
In response to the surge of AI‑generated combat media, X announced a temporary policy demonetizing blue‑check accounts that post such videos without a label, though the platform has not disclosed how many accounts have been affected. Some Iranian officials have purchased the premium subscriptions that grant blue checks, gaining boosted engagement and the potential to earn money from their posts.
Continued Non‑AI Disinformation
Traditional false narratives persist alongside the AI fakes. Footage from elsewhere in the conflict, for example, has been repurposed to claim that Iran fired the missile that struck a primary school in Minab, even though verification showed a US Tomahawk cruise missile hit a nearby naval base. This illustrates how AI‑generated and conventional misinformation blend together on X.
Calls for Stronger Regulation
Experts warn that the ease of creating AI‑generated content with little consequence threatens factual discourse. The Institute of Strategic Dialogue highlighted the use of AI to push overtly antisemitic narratives, while analysts noted that detection tools remain inconsistent. Meta's Oversight Board criticized the company's labeling approach as inadequate for the speed and scale of AI‑driven misinformation, especially during crises. Researchers argue that without robust regulation, the spread of AI‑based fake news could erode fact‑based public discourse.