Musk's Grok AI Fails Fact-Check, Fuels Iran War Disinformation on X

A simple test by disinformation researcher Tal Hagin revealed a core failure in Elon Musk's vision for X. When Hagin asked the platform's integrated AI chatbot, Grok, to verify a video about Iranian missiles striking Tel Aviv, the tool repeatedly gave incorrect details about the clip's date and location. It then attempted to substantiate its false claims by sharing an AI-generated image. "Now Grok is replying with AI slop of destruction," Hagin noted.

This incident highlights a central problem on X since the conflict between the US, Israel, and Iran began on February 28. The platform has been inundated with fabricated material, a situation now intensified by a surge of convincing AI-generated images and videos. These fakes are circulated by paid, verified accounts and Iranian officials aiming to exaggerate battlefield results. One AI image, falsely showing a downed US B-2 bomber, garnered over a million views before removal.

According to researchers at the Institute for Strategic Dialogue, Iranian propaganda networks are also using AI to create and spread antisemitic content. One widely viewed fake video, manipulated to depict former President Donald Trump in a compromising scenario, was seen 6.8 million times.

X recently stated it would temporarily strip monetization from verified accounts posting unlabeled AI war footage, but has not disclosed any enforcement actions. The policy does not address the broader ecosystem in which AI tools, now sophisticated enough to deceive professionals, produce consequence-free fabrications. "I see the proliferation of AI-based fake news pushing us over the edge of a fact-based world unless we enact change now," Hagin told WIRED.

Meanwhile, non-AI falsehoods persist. Following a deadly strike on a school in Minab, Iran, pro-Trump accounts repurposed unrelated conflict footage to falsely blame Iran, despite evidence pointing to a US Tomahawk missile striking a nearby base. As NewsGuard analyst Isis Blachez observes, the realism of new AI visuals, combined with unreliable detection tools, leaves users increasingly vulnerable to accepting fabrication as proof.

Source: Wired
