AI-generated ‘poverty porn’ fake images being used by aid agencies


AI-generated images showing extreme poverty, children and survivors of sexual violence are flooding photo sites and are increasingly used by major health NGOs, according to medical professionals around the world who have expressed concern about a new era of “poverty porn.”

“People everywhere are using it,” said Noah Arnold, who works at Fairpicture, a Switzerland-based organization dedicated to promoting ethical imagery in global development. “Some are actively using AI imaging, and others we know are at least experimenting.”

Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp who studies the production of global health images, said: “The images reproduce the visual grammar of poverty – children with empty plates, cracked earth, stereotypical visuals.”

Alenichev collected more than 100 AI-generated images of extreme poverty used by individuals or NGOs in social media campaigns against hunger or sexual violence. The images he shared with the Guardian show scenes that are exaggerated and perpetuate stereotypes: children huddled together in muddy water; an African girl dressed in a wedding dress with a tear staining her cheek. In a commentary published Thursday in the Lancet Global Health, he says these images constitute “poverty porn 2.0.”

While it’s difficult to quantify the prevalence of AI-generated images, Alenichev and others say their use is increasing, driven by concerns about consent and cost. Arnold said cuts in US funding to NGO budgets had made the situation worse.

“It’s clear that various organizations are starting to consider computer-generated images rather than real photographs, because it’s cheap and there’s no need to bother with consent and everything else,” Alenichev said.

AI-generated images of extreme poverty are now appearing by the dozens on popular photo sites, including Adobe Stock Photos and Freepik, in response to queries like “poverty.” Many bear captions such as “Photorealistic child in refugee camp”; “Asian children swim in a river full of trash”; and “A white Caucasian volunteer gives medical consultations to young black children in an African village.” Adobe sells licenses for the latter two images for around £60.

“They are so racialized. They should never even allow these images to be published, because they reflect the worst stereotypes about Africa, or India, or whatever,” Alenichev said.

Joaquín Abela, CEO of Freepik, said the responsibility for the use of such extreme images lies with media consumers, not platforms like his. AI photos, he said, are generated by the platform’s global community of users, who can receive licensing fees when Freepik customers choose to purchase their images.

Freepik has tried to reduce the bias seen in other parts of its photo library, he said, by “injecting diversity” and trying to ensure gender balance in photos of lawyers and CEOs hosted on the site.

But, he said, his platform couldn’t do much. “It’s like trying to dry up the ocean. We’re trying, but the reality is that if customers around the world want images a certain way, there’s absolutely nothing anyone can do.”

A screenshot showing AI-generated images of “poverty” on a stock photo site. Images like these have raised concerns about bias and stereotyping. Illustration: Freepik

In the past, large charities have used AI-generated images as part of their global health communications strategies. In 2023, the Dutch branch of the British charity Plan International released a video campaign against child marriage containing AI-generated images of a girl with a black eye, an older man and a pregnant teenage girl.

Last year, the UN released a video on YouTube featuring AI-generated “reenactments” of conflict sexual violence, which included the AI-generated testimony of a Burundian woman describing being raped by three men and left to die in 1993 during the country’s civil war. The video was removed after the Guardian contacted the UN for comment.

A spokesperson for UN peacekeeping operations said: “The video in question, which was made over a year ago using a rapidly evolving tool, has been removed because we believe it shows inappropriate use of AI and may pose risks to information integrity, by mixing real images and artificially generated near-real content.

“The UN remains steadfast in its commitment to supporting victims of conflict-related sexual violence, including through innovation and creative advocacy.”

Arnold said the growing use of these AI images comes after years of debate in the industry around ethical imagery and dignified storytelling about poverty and violence. “The assumption is that it is easier to take ready-made AI visuals, which come without consent requirements, because they do not depict real people.”

Kate Kardol, an NGO communications consultant, said the images frightened her and recalled previous debates over the use of “poverty porn” in the sector.

“It saddens me that the fight for more ethical representation of people in poverty has now extended to the unreal,” she said.

Generative AI tools have long been observed to reproduce – and sometimes exaggerate – broader societal biases. The proliferation of biased images in global health communications could make the problem worse, Alenichev said, because the images could leak onto the broader internet and be used to train the next generation of AI models, a process that has been shown to amplify bias.

A spokesperson for Plan International said the NGO had this year “adopted guidelines advising against the use of AI to depict individual children”, and that the 2023 campaign had used AI-generated images to safeguard “the privacy and dignity of real girls”.

Adobe declined to comment.
