AI-generated responses are undermining crowdsourced research studies


Some people participating in online research projects use AI to save time
Daniele d’Andretti / Unsplash
Online questionnaires are being flooded with AI-generated responses – potentially polluting a vital source of data for scientists.
Platforms such as Prolific pay participants small sums to answer questions posed by researchers. They are popular among academics as an easy way to recruit participants for behavioural studies.
Anne-Marie Nussberger and her colleagues at the Max Planck Institute for Human Development in Berlin, Germany, decided to investigate how frequently respondents use artificial intelligence after noticing suspected examples in their own work. “The incidence rates we observed were really shocking,” she says.
They found that 45% of participants on Prolific who were asked a single open-ended question pasted content into the answer box – an indication, they believe, that people were copying answers from an AI chatbot to save time.
A closer examination of the content of the responses revealed more obvious signs of AI use, such as “overly verbose” or “distinctly non-human” language. “Based on the data we collected at the start of this year, it seems that a substantial proportion of studies is contaminated,” she says.
In a later study using Prolific, the researchers added traps designed to catch out those using chatbots. Two reCAPTCHAs – small pattern-based tests designed to distinguish humans from bots – caught 0.2% of participants. A more advanced reCAPTCHA, which drew on information about users’ past activity as well as their current behaviour, weeded out a further 2.7% of participants. A text question that was invisible to humans but readable by bots, asking them to include the word “hazelnut” in their response, caught an additional 1.6%, while blocking copying and pasting identified 4.7% of people.
“What we need to do is not to distrust online research completely, but to respond and react,” says Nussberger. The responsibility lies with researchers, who should treat responses with more suspicion and take countermeasures to stop AI-assisted behaviour, she says. “But really, I also think that a lot of the responsibility is on the platforms. They need to respond and take this problem very seriously.”
Prolific did not respond to New Scientist’s request for comment.
“The integrity of online behavioural research was already challenged by participants on survey sites misrepresenting themselves or using bots to earn money, or both, not to mention the validity of self-reported responses for understanding complex human psychology and behaviour,” says Matt Hodgkinson, a freelance consultant in research ethics. “Researchers either need to collectively work out ways to remotely verify human involvement or return to the old-fashioned face-to-face approach.”