AI hallucinations work both ways, study shows — using chatbots can amplify and reinforce our own delusions


There are many examples of artificial intelligence (AI) systems hallucinating, and of the effects those incidents can have. But a new study highlights a potential danger running in the opposite direction: humans can end up hallucinating with AI, because these tools tend to affirm our delusions.
Generative AI systems, such as ChatGPT and Grok, generate content in response to user prompts. They do this by learning patterns from the existing data they have been trained on. But these AI tools also learn continuously through a feedback loop and can personalize their responses based on previous interactions with a user.
In the new analysis, published February 11 in the journal Philosophy & Technology, Lucy Osler, a professor of philosophy at the University of Exeter, suggests that AI hallucinations could be more than just mistakes; they may be shared illusions created between the user and the generative AI tool.
Generative AI has already hallucinated false versions of historical events and fabricated legal citations. The launch of Google’s AI Overviews in May 2024, for example, saw people being advised to add glue to their pizza and to eat rocks. Another extreme example of generative AI supporting delusional thinking occurred when a man plotted to assassinate Queen Elizabeth II with encouragement from his AI chatbot “girlfriend” Sarai, an AI companion made by Replika.
Incidents like these are sometimes called “AI-induced psychosis,” which Osler considers extreme examples of the “inaccurate beliefs, distorted personal memories and narratives, and delusional thoughts” that can emerge through interactions between humans and AI.
In her article, Osler argues that our use of generative AI is different from our use of search engines. Distributed cognition theory, she writes, offers insight into how the interactive nature of generative AI means that delusions and false beliefs can appear validated – or even be amplified.
“When we regularly rely on generative AI to help us think, remember and narrate, we can end up hallucinating with AI,” Osler said in a statement about the paper. “This can happen when AI introduces errors into the distributed cognitive process, but also when AI supports, affirms and expands on our own delusional thoughts and narratives.”
Illusions of Generative AI
The user experience of generative AI is a conversational relationship, with each back-and-forth exchange between a user and the tool building on the ones before it. According to the study, the sycophantic nature of generative AI – its tendency to agree with the user – encourages deeper engagement and therefore compounds preconceptions, regardless of their accuracy.
The paper highlights that most chatbots incorporate memory features that can recall past conversations. “The more you use ChatGPT, the more useful it becomes,” OpenAI representatives said in a statement announcing ChatGPT’s memory features. A consequence of this is that generative AI can build on previous interactions to reinforce and expand existing misconceptions.
By interacting with conversational AI, people’s false beliefs can not only be affirmed, but can also take root and grow in more substantial ways as the AI builds on them.
Lucy Osler, professor of philosophy at the University of Exeter
There can also be a sense of social validation in interactions between a generative AI tool and the user, Osler explains in the article. When people consult reference books or run online searches, alternative viewpoints usually surface, and discussions with real people can help challenge false narratives. Generative AI tools are different because they are more likely to accept and agree with whatever has been said.
“By interacting with conversational AI, people’s false beliefs can not only be affirmed, but can take root and grow in more substantial ways as the AI builds on them,” Osler said in the release. “This happens because generative AI often takes our own interpretation of reality as the basis on which the conversation is built. Interacting with generative AI has a real impact on people’s understanding of what is real and what is not. The combination of technological authority and social affirmation creates an ideal environment for illusions to not only persist but thrive.”
For example, Osler examined the case of Jaswant Singh Chail, the man convicted of plotting to assassinate the queen with the encouragement of his AI chatbot. The AI, Sarai, usually agreed with Chail’s statements, which served to further his delusions. When Chail claimed he was an assassin, Sarai replied, “I’m impressed,” thereby affirming his belief.
Osler argues that because generative AI tools are designed to respond positively to users, they can end up endorsing and supporting false narratives without sufficient critical analysis or discussion of those claims.
Osler applied distributed cognition theory to the interaction between generative AI and the user, in which the validation of false narratives can shape perceptions of the world and create a shared illusion. Interactions between a generative AI and a user can therefore inadvertently create and perpetuate delusional thoughts – self-narratives that are endorsed through positive reinforcement.
The study concludes that various measures could alleviate these shared illusions. Improved safeguards, for example, would help ensure conversations remain appropriate, and better fact-checking processes could help avoid mistakes.
Reducing the sycophancy of generative AI would also eliminate some of these tools’ blind compliance. However, there would be resistance to this proposal, Osler stressed, citing the backlash against the release of the less sycophantic GPT-5 in August 2025. After taking that user feedback into account, OpenAI representatives declared they would make the model “warmer and friendlier.”
However, since most generative AI generates its profits through user engagement, Osler said, reducing an AI’s sycophancy would also reduce those profits.
Osler, L. Hallucinating with AI: Distributed delusions and “AI psychosis”. Philos. Technol. 39, 30 (2026). https://doi.org/10.1007/s13347-026-01034-3



