A Wikipedia Group Made a Guide to Detect AI Writing. Now a Plug-In Uses It to ‘Humanize’ Chatbots

On Saturday, technology entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that tells the AI model to stop writing like an AI model.
Called Humanizer, the simple prompt plugin provides Claude with a list of 24 language and formatting patterns that Wikipedia editors have flagged as telltale signs of chatbot writing. Chen released the plugin on GitHub, where it had garnered more than 1,600 stars as of Monday.
“It’s really handy that Wikipedia has put together a detailed list of ‘AI writing signs,'” Chen wrote on X. “So much so that you can just tell your LLM to… not do that.”
The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been researching AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. Volunteers marked more than 500 articles for review and, in August 2025, published an official list of trends they continued to observe.
Chen’s tool is a “skills file” for Claude Code, Anthropic’s terminal coding assistant: a Markdown-formatted file containing a list of written instructions (you can see them here) that gets appended to the prompt fed into the large language model that powers the assistant. Unlike a normal system prompt, skill information is formatted in a standardized way that Claude models are fine-tuned to interpret more reliably. (Custom skills require a paid Claude subscription with code execution enabled.)
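For a sense of the mechanism, a Claude Code skill file is roughly shaped like this. This is a minimal sketch, assuming Anthropic's documented SKILL.md layout (a YAML frontmatter block followed by free-form Markdown instructions); the rules shown are paraphrased from this article, not Chen's actual Humanizer file:

```markdown
---
name: humanizer
description: Rewrite prose to avoid common AI-writing patterns
---

When writing or editing prose, avoid these patterns:
- Do not use inflated phrases like "marks a pivotal moment."
- Replace promotional adjectives ("breathtaking," "nestled in")
  with plain, factual description.
- Avoid tacking "-ing" clauses onto the end of sentences
  ("...symbolizing the region's commitment to innovation").
```

When the skill is active, Claude Code appends instructions like these to the model's context, which is why the plugin needs no code of its own to work.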
But as with all AI prompts, language models don’t always follow skill files perfectly. So does the Humanizer actually work? In our limited testing, Chen’s skill file made the AI assistant’s output less formal and more conversational, but it may come with downsides: It won’t improve factual accuracy and could hurt coding ability.
In particular, some Humanizer instructions may mislead you, depending on the task. For example, the Humanizer skill includes this line: “Have opinions. Don’t just report facts: react to them. ‘I really don’t know how to feel about this’ is more human than neutrally listing pros and cons.” Although sounding imperfect can read as human, this kind of advice probably wouldn’t do you any favors if you were using Claude to write technical documentation.
Even with its drawbacks, there is some irony in the fact that one of the most referenced sets of rules on the web for detecting AI-assisted writing can also help people evade that detection.
Spot the patterns
So what does AI writing look like? The Wikipedia guide is specific with many examples, but we’ll only give you one here for brevity.
Some chatbots love to energize their topics with phrases like “mark a pivotal moment” or “bear witness to,” according to the guide. They write like tourist brochures, calling the views “breathtaking” and describing the towns as “nestled in” picturesque regions. They add “-ing” phrases at the end of sentences to sound analytical: “symbolizing the region’s commitment to innovation.”
To get around these rules, the Humanizer skill asks Claude to replace inflated language with simple facts and offers this example of transformation:
Before: “The Statistical Institute of Catalonia was officially created in 1989, marking a pivotal moment in the evolution of regional statistics in Spain.”
After: “The Statistical Institute of Catalonia was created in 1989 to collect and publish regional statistics.”
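The pattern-based detection described above is mechanical enough to sketch in code. This is an illustrative toy, not anything from the Wikipedia guide or Chen's plugin: a tiny scanner for three of the phrases mentioned in this article (the actual guide lists many more patterns, with substantial nuance):

```python
import re

# Three example "AI tells" drawn from phrases quoted in this article.
# The real Wikipedia guide covers far more patterns and context.
AI_TELLS = [
    r"\bmark(?:s|ed|ing)? a pivotal moment\b",
    r"\bnestled in\b",
    r"\bbreathtaking\b",
]

def flag_ai_tells(text):
    """Return a list of flagged phrases found in `text`."""
    hits = []
    for pattern in AI_TELLS:
        match = re.search(pattern, text, flags=re.IGNORECASE)
        if match:
            hits.append(match.group(0))
    return hits

before = ("The Statistical Institute of Catalonia was officially created "
          "in 1989, marking a pivotal moment in regional statistics.")
after = ("The Statistical Institute of Catalonia was created in 1989 "
         "to collect and publish regional statistics.")

print(flag_ai_tells(before))  # ['marking a pivotal moment']
print(flag_ai_tells(after))   # []
```

This also illustrates why such detection is brittle: once the list of patterns is public, a model can be instructed to rewrite around it, which is exactly what the Humanizer skill does.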
Claude will read this and do its best, as a pattern-matching machine, to produce output that fits the context of the conversation or task at hand.
Why AI writing detection fails
Even with such a carefully developed set of rules from Wikipedia editors, we have previously explained why AI writing detectors do not work reliably: There is nothing inherently unique about human writing that consistently differentiates it from LLM output.
One reason is that while most AI language models tend toward certain kinds of language, they can also be instructed to avoid it, as the Humanizer skill does. (Although that is sometimes very difficult, as OpenAI discovered during its years-long struggle with the em dash.)


