The Pope’s Warnings About AI Were AI-Generated, a Detection Tool Claims

On Monday, a brand-new Reddit account appeared on the widely read forum r/AmItheAsshole, where users ask strangers to arbitrate their personal disputes. This particular user asked whether he had crossed a line by “refusing to watch my mother-in-law’s kids because I have my own job and responsibilities.” The post itself was succinct, simple, and grammatically clean, describing a situation in which the poster’s mother-in-law and father often expected him to take care of the children without notice, ultimately leading to an argument.
“Now there’s tension at home, and I’m starting to wonder if I handled it badly,” the poster concluded. “I understand that raising children is stressful, but I also feel like I shouldn’t be forced to take on this responsibility when it’s not my role.” The responses were largely supportive: the children were not his, many commenters said, and leaving the house would be the best solution.
But according to AI detection software developed by Pangram Labs, which claims an accuracy rate of 99.98% and a false positive rate of just one in 10,000, the original story of family discord was generated by AI.
I saw the post flagged as AI content while scrolling, thanks to the latest version of Pangram’s Chrome extension, which is being released to the public this week. At the $20-per-month paid tier, the tool analyzes posts on social sites including Reddit as you browse. Each analysis also includes a measure of Pangram’s confidence in its conclusion: low, medium, or high.
Researchers have found AI-generated junk all over the web, undermining both journalism and social platforms. Text generated at least in part by AI made up more than a third of all new websites in 2025, according to a study published this month by researchers at Stanford University, Imperial College London, and the Internet Archive. (The researchers used earlier Pangram tools to reach their conclusions.)
It’s this mess that Max Spero, CEO of Pangram and self-proclaimed “trash janitor,” wants to help clean up. He tells WIRED that adding instant scanning to the company’s browser extension gives users a more transparent way to check AI content on sites they frequent.
“Providing proactive controls can be much more helpful to people who worry they won’t spot the junk on their own,” Spero says. “Pasting text into an external tool is a lot of friction. People just aren’t going to do that.”
Of course, made-up scenarios are nothing out of the ordinary on subreddits like r/AmItheAsshole, where trolls are known for posting engagement bait built on particularly absurd fiction. Yet even an informed reader might not suspect that a relatively banal account like the one described above is potentially fake. (The poster who shared it did not respond to a request for comment about whether he used AI or what he hoped to achieve with the post, which he later deleted.)
Although no AI detection system is perfect, Pangram’s is considered the most consistent and accurate by third-party researchers at several universities; a 2025 University of Chicago study auditing AI detection software gave Pangram its highest rating and noted that its false positive rate was nearly zero, especially on longer texts. Spero says one reason it outperforms its competitors is that it is trained in part on “harder examples that are closer to the boundary between AI and human.” In my own testing on articles published in WIRED, I was unable to get it to generate a false positive.