As the U.S. wages war with Iran, social media users face worsening disinformation


Before the dust had settled on the ruins of the Shajareh Tayyebeh School – a victim of recent US-Israeli military strikes against Iran, which resulted in the deaths of 168 adults and children – people were already practicing engagement farming online. Snippets from digital flight simulators were presented as footage of real-time operations, while out-of-context footage of battleships and old videos of aerial missile attacks were repurposed to sell users a story of Iranian domination. AI-edited content has proliferated.

According to experts, the posts racked up hundreds of millions of views in just a few days.


The growing number of viral posts – and the prospect of more as users earned money for viral lies – was alarming enough to prompt X to change its policies on misinformation. As of yesterday, X announced that it will suspend users from its creator revenue sharing program if they post AI-generated content depicting armed conflict without labeling it as such.

And even Google searches aren’t safe from misinformation these days.

The proliferation of digital disinformation is the product of a network of bots and engagement farm accounts, all with the common goal of being the loudest, most clicked account in the room.

Some hope to gain political and social influence; others just want money. Meanwhile, users, prone to confirmation bias and reliant on digital news sources, repeatedly fall victim to their racket. Engagement farming, no longer just about trading the currencies of memes and clickbait, has become a dangerous and politically charged game.

What users are seeing as the US-Iran conflict rages

Recent posts engaging in active disinformation about the conflict in Iran mainly consist of exaggerating the scale and success of Iranian counterattacks, experts say.

A recent survey conducted by Wired documented hundreds of such posts on Elon Musk's X. One post with more than 4 million views claimed to show ballistic missiles flying over Dubai, but actually depicted an Iranian attack on Tel Aviv in October 2024. Another with more than 375,000 impressions shows a fabricated before-and-after image of assassinated Iranian leader Ali Hosseini Khamenei's bombed compound.

According to Wired, almost all of the posts were shared by blue-checked premium subscriber accounts, including state-funded media outlets in Iran.

As in previous military conflicts, stories have also attempted to pass off video game footage as verified news clips, including AI-manipulated footage of downed F-35 fighter jets ripped from flight simulation games. The images were shared on TikTok, some with links to Russian influence operations, the BBC reported.

In addition to out-of-context images and misleading content, the BBC has also documented a handful of entirely AI-generated videos that have racked up nearly 100 million views in total, shared by what the channel calls the notorious “super-spreaders” of misinformation.

Visuals are a good way for us to understand what happens in war when we can’t understand the magnitude of these conflicts.

-Sofia Rubinson, NewsGuard

A report from the disinformation watchdog NewsGuard also chronicled a group of users sharing viral posts circulating false claims of targeted military strikes against U.S. and Israeli strongholds, primarily using repurposed video footage and out-of-context or completely recontextualized images of destruction.

“[These videos] are posted by anonymous accounts which tend to report on geopolitical conflicts. These are accounts known to NewsGuard for spreading exaggerated claims, usually from a pro-Iranian perspective,” said Sofia Rubinson, editor-in-chief of NewsGuard’s Reality Check newsletter and co-author of the report. From there, Rubinson says, other, more widely followed accounts picked up and spread the false claims.

For example, hours after the first reports of US military strikes in Iran, X users began reposting an image of a sinking naval aircraft carrier. Users claimed it showed a recent attack on the aircraft carrier USS Abraham Lincoln in the Arabian Sea. The US Central Command issued a statement refuting the claim that same day. NewsGuard confirmed that the image actually showed the intentional sinking of the USS Oriskany almost 20 years ago. The claim was shared by unverified news accounts and even Kenyan MP Peter Salasya, whose post has been viewed more than 6 million times.

Several accounts, including Salasya's, shared another video purportedly showing Israel's Dimona nuclear power plant under siege from the air. The video has racked up hundreds of thousands of impressions on anti-Israel and pro-Iran pages.

NewsGuard found that these posts have already garnered at least 21.9 million views on X.

Messages raising fears of domestic reprisals also circulated online, including an unverified list of US cities reportedly marked as the main targets of Iranian sleeper cells – a list that appears to have been written in Apple's Notes app.

Misinformation will only get worse

The acceleration of advanced generative AI and the relaxation of moderation policies on social media platforms have exacerbated the online misinformation crisis, experts have warned.

Particularly in recent months, including the capture of Venezuelan leader Nicolas Maduro by the United States, NewsGuard researchers have noticed a trend of online misinformation emerging during breaking news periods.

“People now have a shorter time frame between an event and authentic visuals coming out of the media,” Rubinson explained. To put it more bluntly: users are losing patience, accustomed to an online environment where information is usually at their fingertips.

These brief periods, or gaps, between breaking news and confirmed videos or photos become fertile ground for disinformation bots and engagement farmers, Rubinson says. They also threaten to reinforce conspiratorial thinking – that mainstream media hide information from the public, for example – and lend themselves to user confirmation bias.

Political conflicts are particularly vulnerable to the spread of such misinformation, which is in turn reinforced by active disinformation campaigns carried out by both parties to the armed conflict. The researchers found that a lack of proximity to events makes it easier to believe information that is out of context or exaggerated.

“It’s an attempt to fill that fog of war,” Rubinson said. “It can be very upsetting for people. They want to make sense of it, and visuals are a good way for us to understand what happens in war when we can’t understand the scale of these conflicts.”

This is becoming a bigger problem as individuals increasingly rely on social media platforms as their only sources of information, and as previously reliable fact-checking tools, including simple Google searches, become increasingly unreliable.


AI harms more than it helps

AI chatbots and search are now woven into the very fiber of real-world crisis events, as users turn to them for fact-checking in real time. Rubinson said almost all of the X posts analyzed by NewsGuard included the same response: “@Grok, is this true?”

But AI assistants and platform chatbots, including X’s Grok, are notoriously unreliable at verifying breaking news. They are also inconsistent in enforcing their own platforms’ moderation policies. The BBC discovered that Grok had mistakenly verified recent AI-generated images depicting Iranian military movements, for example.

According to a second report by NewsGuard published on March 3, Google’s AI-powered search summaries repeated misleading claims about the US-Iran conflict when asked to perform reverse image searches. For example, NewsGuard researchers uploaded an image from a video shared online purporting to show the destruction of a CIA outpost in Dubai. Google’s AI summary fact-checked the story, writing: “The image shows a fire in a high-rise residential building in Dubai, United Arab Emirates, believed to have occurred on March 1, 2026, following regional tensions. … Conflicting reports have emerged regarding the cause, with some sources mentioning a drone strike and others referring to the building as a specific intelligence center.”

The video actually shows a residential fire that occurred in 2015 in the city of Sharjah.

Security experts have sounded the alarm on such “AI information threats,” including AI tools used to generate and amplify misleading content. A report from the UK’s Centre for Emerging Technology and Security suggests that, without direct intervention, the deterioration of the information environment can pose existential threats to public safety, national security, and democracy.

Meanwhile, civilians and journalists on the ground in Iran are battling an almost total internet outage, prompting a massive push by the Trump administration and ally Elon Musk to provide Starlink internet connections to people on the ground. Bad actors, on the other hand, always find their way around the block and back to sites like X.
