‘Inoculation’ helps people spot political deepfakes, study finds


According to a study my colleagues and I conducted, informing people about political deepfakes through textual information and interactive games improves their ability to spot AI-generated videos and audio that falsely depict politicians.

Although researchers have primarily focused on advancing deepfake detection technologies, there is also a need for approaches that address the potential audiences of political deepfakes. Deepfakes are becoming increasingly difficult to identify, verify and combat as artificial intelligence technology improves.

Is it possible to inoculate the public against deepfakes, raising their awareness before exposure? My recent research with Sang Jung Kim and Alex Scott, media studies researchers at the University of Iowa’s Visual Media Lab, found that inoculation messages can help people recognize deepfakes and even make them more willing to debunk them.

Inoculation theory proposes that psychological inoculation – analogous to medical vaccination – can immunize people against persuasive attacks. The idea is that explaining how deepfakes work prepares people to recognize them when they encounter them.

In our experiment, we exposed a third of the participants to a passive inoculation: traditional text warning messages about the threat and characteristics of deepfakes. We exposed another third to active inoculation: an interactive game that challenged participants to identify deepfakes. The remaining third received no inoculation.

Participants then randomly saw either a deepfake video showing Joe Biden making pro-abortion rights statements or a deepfake video featuring Donald Trump making anti-abortion statements. We found that both types of inoculation were effective in reducing the credibility participants gave to deepfakes, while increasing people’s awareness and intention to learn more about them.

Why it matters

Deepfakes pose a serious threat to democracy because they use AI to create highly realistic fake audio and video. These deepfakes can make it appear as if politicians are saying things they never said, which can damage public trust and lead people to believe false information. For example, some voters in New Hampshire received a phone call that sounded like Joe Biden’s voice, telling them not to vote in the state’s primary election.

As AI technology becomes more common, it is especially important to find ways to reduce the harmful effects of deepfakes. Recent research shows that labeling deepfakes with fact-checking statements is often not very effective, especially in political contexts. People tend to accept or reject fact-checks based on their existing political beliefs. Additionally, false information often spreads faster than accurate information, making fact-checking too slow to fully counter its impact.

As a result, researchers are increasingly calling for new ways to prepare people to resist misinformation in advance. Our research helps develop more effective strategies to help people resist AI-generated misinformation.

What other research is underway

Most research on inoculating against misinformation relies on passive media literacy approaches that primarily deliver text-based messages. However, more recent studies show that active inoculation may be more effective. For example, online games involving active participation have been shown to help people resist violent extremist messages.

Additionally, most previous research has focused on protecting people from textual misinformation. Our study instead examines inoculation against multimodal disinformation, such as deepfakes that combine video, audio and images. Although we expected active inoculation to work better for this type of misinformation, our results show that both passive and active inoculation can help people deal with the threat of deepfakes.

What’s next

Our research shows that inoculation messages can help people recognize and resist deepfakes, but it remains unclear whether these effects persist over time. In future studies, we plan to examine the long-term effect of inoculation messages.

We also aim to determine whether inoculation works in areas beyond politics, including health. For example, how would people react if a deepfake showed a fake doctor spreading health misinformation? Would prior inoculation messages help people question and resist such content?

The Research Brief is a brief overview of interesting academic work.

This article is republished from The Conversation, an independent, nonprofit news organization that brings you trusted facts and analysis to help you make sense of our complex world. It was written by: Bingbing Zhang, University of Iowa

Bingbing Zhang receives funding from the University of Iowa School of Journalism and Mass Communication.
